
Tuesday, May 17, 2016

Flux Architecture

Flux is quickly becoming the standard architecture for building large-scale applications. It's the topic of my latest book, Flux Architecture, available on Amazon.

Monday, June 25, 2012

Good Architects Write Code

Writing code makes the difference between a mediocre architect and an outstanding one. A number of factors influence this thinking, not least of which is that architects are essentially programmers with additional responsibilities. Part of thinking like a programmer is thinking in code, and part of thinking like an architect is thinking like a programmer. That is, as an architect, no matter how many practices and procedures stand between you and the software, code must ultimately be written.

What is the best way to make this happen? After all, the big picture falls to the architect to sort out. All the stakeholders' expectations must be aligned with what we're actually able to produce. So, then, maybe I can help by contributing some code, or at the very least, reading and reviewing some code. But does that leave time for my other important architectural activities? Too often, code and the process of curating it are neglected, not just by architects, but by the development process as a whole. Developers recognize that the success of any modern organization rests heavily on the shoulders of its technical foundation: its ability to write and understand code.

When an architect suggests that they have other activities to perform — ones that seem to place code on the back-burner — it's no wonder the negative stigma surrounding software architecture is so prevalent. Architects are seen as big-picture guys, out to do nothing but draw diagrams without consideration of the technological challenges facing developers. That perception, and other negative ones about software architecture, could be partially resolved if architects wrote code.

Team Player
Writing code is a team sport — you try out, you make the team, you pass the ball around, you win, you lose. A narrow analogy might place the architect in the role of coach: observing the game, commentating on events as they happen, providing feedback, and devising plans. That might work well if software development teams were structured the same as organized sports teams, but we often don't have a fixed roster of developers, and the teams we're up against are often composed of many more players than ours. Despite these limitations, there is one obstruction in sports that doesn't exist in software development.

The architect can throw on a pair of skates, jump over the boards, and score a goal or two. It's as simple as that, really. Being a team player, that is, getting your hands dirty and helping out with the code-level work, will bode well for team morale. This is how, as an architect, you can earn some serious respect. Bend over backward to make life easier for programmers, and they'll return the favor, I assure you. Put this way, the architect plays the role of both captain and coach.

Now, this certainly isn't an easy feat. Architects can't simply ignore their duty of ensuring stakeholder satisfaction. As much fun as programming in the wild west can be, architects are programmers with additional responsibilities. Let me put it this way — you're still supposed to look after the architectural description, and basically ensure that the software system itself maintains its architectural integrity. So no, don't forget about those things. Don't drop what you're doing and start your new programmer position just yet — these purely architectural day-to-day jobs are important. But make sure you do get involved with the code from time to time. Even if you're not capable of contributing functional components, ask questions. Earn some respect by taking an active interest in the nuts and bolts.

Nuts and Bolts
Beyond earning respect from developers, it is important that architects understand the nuts and bolts used to glue their system components together. Understanding the big picture and the impact it has on every stakeholder is the most important deliverable in any software architecture. This includes the nuts and bolts.

Developers that disregard the software architect role as nothing more than big up-front design have good reason for this attitude. If we're merely handing down ideals without sympathizing as to how difficult these implementation jobs actually are, then we're doing big up-front design while leaving reality as an afterthought. Reality cannot be an afterthought. Reality is the low-level details, the subtle but important code we need to write that realizes the requirements of the stakeholders.

As an architect, you're also a programmer — never forget that. Like it or not, if you don't understand the nuts and bolts, you had better make sure you allot time to study them. The low-level components have a big impact on what is ultimately delivered as a software product. This impact ripples from the bottom all the way up to the users, touching on every use case we've put together to ensure the software can satisfy all requirements. If the implementation was executed well, these nuts and bolts will be encapsulated inside robust APIs that we've designed. But as you know, abstractions only go so far in protecting us against unanticipated behaviour of the nuts and bolts.

The software architect is in a good position to understand the quality of the system's abstractions and how vigorously they're prepared to defend against unexpected behaviour. To approach this job, I would look at things from two directions. First, the top-down approach: looking at the requirements and conversing with the stakeholders, who may or may not know what they want. This is the business side of the game, important to understanding what the software must do. For everything else, I would take the bottom-up approach, starting with the nuts and bolts. Along this route, you'll eventually meet your business path. Eventually, not immediately — this is an iterative approach to software architecture.

Tuesday, April 6, 2010

Software Artifacts

What exactly are software artifacts? Put simply, a software artifact is a file that exists as part of some software system. The most common software artifacts are source code files and executable files. What isn't usually considered an artifact, however, is the logical design of the system.

It sounds strange that the system design isn't considered a software artifact, but in most cases that is the truth. The design of any software system is implicitly captured in its source code files; without the system design, those files would have no reason to exist.

If the software system is modeled with UML, shouldn't the model files themselves be considered software artifacts? I would like to think so, especially since they do a better job of making the system design explicit than the source code does.

Additionally, UML models can also make the software artifacts of the modeled system explicit. That is, you can have meta-artifacts. This of course assumes that the model itself is considered an artifact. If that sounds confusing, it doesn't have to be: UML artifact elements can help illuminate where in the model these artifacts, usually source code files, are expressed as a logical design.

Monday, October 26, 2009

GUI Controller Design

The introduction of graphical user interfaces, or GUIs, has made a huge impact on the way humans interact with computer software. The command line, or terminal, interface is intimidating to many people; you can't do much intuitively with the command line unless you have several years' experience using it. With a GUI, widgets, the components that make up the screen displayed to the user, are designed in such a way that users can infer how to interact with them. For instance, with a button widget it is more often than not obvious that the button should be clicked. In addition to the actions the user must take to interact with the interface, the GUI allows descriptive text to be easily placed. This helps the user determine why this button should be clicked instead of that one.

On the development side of things, there is no shortage today of GUI libraries available for use. Most of these libraries are available free of charge as open source software. Also very popular these days is the web browser as an application GUI platform. This is simply because most machines have a web browser capable of rendering HTML. It makes sense to take this approach to reach the widest audience possible.

The GUI library of choice, be it Qt or the web browser, is just one layer in the GUI design structure. In fact, it is the lowest level. Below the GUI library layer, at a lower level still, are all the aspects that the application developer doesn't want to deal with. What about the opposite direction in the logical layout of the GUI design structure? The next layer up could potentially be the application controlling layer itself. In many applications, this is in fact how components are layered, but it may not always be ideal. It can be beneficial for design purposes to implement a facade-type abstraction between the application logic and the various GUI widgets that make up the application GUI. Illustrated below are potential layers that might be used to tie the GUI to the application itself.



Here, the outermost layer is the App Controllers layer. This is the heart of the application logic; the brain of the program lives here. Next, we have the GUI Controllers, another abstraction created by developers for interacting with the GUI library. Finally, at the lowest layer sits the GUI Lib. With this layout, the application logic never interacts directly with the GUI library, which is an ideal design trait. GUI controllers created by the developers of the application offer more flexibility in almost every way imaginable.

Firstly, the application logic doesn't need to concern itself with assembling the GUI. Chances are that a given GUI library isn't going to provide the exact screens you want to display to your users. It does, however, provide all the widgets required for a consistent look and feel. It is the responsibility of the GUI controlling layer to assemble these widgets in a coherent manner. Again, the application logic only needs to know that it must display something to the user; it asks the GUI controlling layer to carry out this task faithfully. There is also the potential for technology independence. If the application controlling layer interacts directly with the GUI library, modifying the application to support another GUI library is going to be nearly impossible. If, however, this is the responsibility of the GUI controlling layer, it suddenly becomes feasible. Not only does this help with technological independence, but also with platform portability. Chances are that subtle differences in how the widgets are created and displayed will be necessary across platforms. This should be handled by the GUI controlling layer, not the application layer, which should function as-is on any platform.
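
As a rough sketch of this layering in Python, with a console stand-in playing the part of the GUI library (all class and method names here are illustrative, not from any real toolkit):

class ConsoleGuiLib(object):
    """Stand-in 'GUI library' that renders to the console."""
    def create_window(self, title):
        return ConsoleWindow(title)

class ConsoleWindow(object):
    def __init__(self, title):
        self.title = title
        self.lines = []
    def add_label(self, text):
        self.lines.append(text)
    def show(self):
        print("== %s ==" % self.title)
        for line in self.lines:
            print(line)

class GuiController(object):
    """Facade: the only layer that touches the GUI library."""
    def __init__(self, gui_lib):
        self.gui_lib = gui_lib
    def show_page(self, title, rows):
        # Assemble library widgets into a coherent screen.
        window = self.gui_lib.create_window(title)
        for row in rows:
            window.add_label(str(row))
        window.show()

class AppController(object):
    """Application logic; knows only that something must be shown."""
    def __init__(self, gui):
        self.gui = gui
    def display_report(self, rows):
        self.gui.show_page("Report", rows)

app = AppController(GuiController(ConsoleGuiLib()))
app.display_report(["row one", "row two"])

Swapping ConsoleGuiLib for a Qt-backed implementation would only touch the GUI controlling layer; the AppController stays untouched, which is the portability argument made above.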

Illustrated below is an application controller and a GUI controller interacting. The idea here is to show that the application controllers do not interact directly with the GUI library. In addition, the application controller serves as a communication channel to other lower layers. For instance, here, the page widget data is retrieved from the database by the application controller. The application controller then sends a message to the GUI controller to construct a GUI component, sending the data retrieved from the database as part of the message.

Tuesday, October 20, 2009

Evaluating Development Processes

I read an interesting entry here about the software development process, and it reminded me how simple, in fact, it can be. I was reminded of some of the common phases of the software development life cycle and that they are all important for success, to varying degrees. The key thing is that variability.

Analysis, design, implementation, and testing seem to be the common factors in any development process. Realistically, one can't create software without crossing each of these phases at least once. They all must occur, and ideally for longer than just briefly. But whatever development process a given team chooses, each of these phases is going to need evaluation in terms of time investment.

This is the part that comes before a given project even comes into existence for a software development team. This is where it is important to set a consistent amount of time to be spent on each phase. But this isn't going to happen for the first project a team hammers out; for a first project, trying to set an appropriate amount of time for each development phase is completely pointless. The first project is going to be trial and error. What is important is that once the team finds a time allotment that works, it stays consistent. Consistency makes all stakeholders happy when it comes to timing, and that includes the customers.

Friday, October 16, 2009

When To Generate Code

Code generation can be a blessing, a complete nightmare, or a combination of both for developers. It is a blessing when the right tools and the right know-how are employed. It is a nightmare when the wrong tool is used and much time has been invested in it. It is both a blessing and a nightmare when all appears to be going well until the maintenance of the code becomes unmanageable.

The whole point of using code generation in the first place is to eliminate the need for developers to write mundane, boilerplate code. Another use, although still considered boilerplate in most circumstances, is generating GUI code. Many GUI builder tools allow for this in many programming languages.

Whether the boilerplate code was generated by a UML modeling tool or by a GUI builder tool, the generated code should be imported by some other application module. This is necessary in order to promote isolation between the generated code and the human-written code. The hand-crafted stuff created by a human developer usually doesn't interact well with the generated stuff. It is always going to make more sense to let the developer find a way to make the generated code work with the other application code; the reverse isn't true, since the generated code isn't smart enough to work with the developer's code.
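
A minimal sketch of this isolation, with both sides collapsed into one file for brevity; in practice the generated class would live in its own machine-generated module that is imported, never edited:

# The generated side; imagine this in a separate, machine-generated
# module (the class and widget names are made up for illustration).
class GeneratedMainWindow(object):
    def setup_ui(self):
        self.widgets = ["menu_bar", "tool_bar", "status_bar"]

# The hand-written side imports and extends the generated code;
# the generated code knows nothing about this class.
class MainWindow(GeneratedMainWindow):
    def __init__(self):
        self.setup_ui()  # let the generated code build the widgets
    def widget_names(self):
        return ", ".join(self.widgets)

window = MainWindow()
print(window.widget_names())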

So the use case for code generation is quite obvious: to save development time. In the case of user interface code, the generated output is nearly impossible to maintain by hand due to its verbosity. This is necessary, and there really isn't any way around it other than to maintain the user interface graphically with a design tool that generates the code. So always generate GUI code, but always import it.

The classes associated with the problem domain generally store data and don't have much behavior, if any. These classes are good candidates for code generation, because the code being generated is a static artifact and the code it contains should be mostly conceptually static. Attempting to implement behavior inside a model that is then transformed into running code is a bad idea, both because method signatures aren't trivial to maintain and because behavior generally grows more complex.

Wednesday, October 14, 2009

Cloud Clients

The term cloud has many definitions in a computing context these days. Some refer to the various social networks as a cloud, which I think is overly broad. Perhaps the best definition is the simplest: "a group of interconnected nodes". A "cloud" of nodes is a design construct, not a deployment one; the number of nodes and their respective locations should have no impact on the terminology used.

So what are the roles of these nodes that make up clouds? Typically, a node plays the role of a server. They act when they are requested to act. Users of these clouds are actually outside the cloud. The servers within the cloud then act on behalf of these client requests.

So is it possible to have these outside clients join the cloud in order to share some of these computational resources? It certainly is, and that is what peer-to-peer computing is all about. In this distributed computing model, the client is the most prevalent role in the entire system. Forget about the managers that allow these clients to discover one another; the managers are necessary, but it is the clients taking on more than a simple dummy role that is interesting. It allows scale within the cloud to spread like a disease.

Thursday, October 8, 2009

Self Assuring Code

The notion of self-assuring code sounds a little like a spin on writing code that incorporates well-written exception handling. This is probably true for the most part, but if exception handling can be taken up a notch, developers can build programs that are resilient beyond expectations.

One way to write self-assuring code is to do just that: write exception handling code for just about every possible exception that might be raised. Exception handling works great for handling any type of exceptional occurrence, such as type errors. Custom exceptions are another common construct in many exception handling implementations. Custom exception classes typically inherit from the primitive error types of the implementation language. What is useful about this approach is that the developer is extending the notion of what constitutes an exceptional occurrence.
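
A small sketch of that idea; the exception class and the domain rule are made up for illustration:

# A custom exception inheriting from a primitive error type,
# extending "exceptional occurrence" to cover a domain rule.
class InvalidAgeError(ValueError):
    pass

def set_age(age):
    if not 0 <= age <= 150:
        raise InvalidAgeError("age out of range: %r" % age)
    return age

try:
    set_age(-5)
except InvalidAgeError as error:
    print("handled:", error)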

Something that even the best exception handling cannot do is give the developer the same type of feedback that quality assurance by human test users provides. This is especially effective if the users have never used the software before, because there is no bias involved. These users haven't had questionable experiences in certain parts of the application, and they are not lenient if it just barely works.

Is it even possible, then, to have self-assuring code? Can developers take exception handling to the next level and build it into the code? Think of trying to find a really difficult bug. What does the developer do? They put debug log messages in places that don't necessarily make sense. But as most developers know, it is these messages, these just-by-chance debug notes in strange places, that often end up solving the problem.

The question becomes, how elaborate must these exceptions become? Do they try to predict future failures based on how well the current execution is going? Maybe. This whole idea is very context dependent, but experimentation with creative exceptions might be worth exploring.

Monday, October 5, 2009

Simple Task Management

In distributed systems, tasks are often performed in parallel with one another. In these types of distributed systems, the task is an important abstraction. There are likely to be thousands or millions of task instances at any given time distributed amongst nodes. In order to achieve concurrency, it is important that these tasks be of reasonable size. Otherwise, there exist large non-interruptible regions that cannot execute in parallel with other tasks.

Another essential abstraction in a task manager design is the manager itself. Call it a task runner if that sounds better; the idea is that it is responsible for running tasks. Not just blindly running tasks, either, but maintaining order amongst all the tasks competing for attention. Tasks also need to be disposed of when they have completed running or are otherwise unable to run.

Implementing this type of distributed task management is hard. There are many ways to go about implementing something this complex and concurrent. My suggestion is to first design the most simplistic task management system conceivable. Then make inferences from it. An extremely simple structure of a task management system is illustrated below.



The Task class is specialized by the Search and Sort classes, meaning that Search and Sort are types of tasks. The Runner class is associated with the Task class because, as the name suggests, it is responsible for running tasks. The Runner instance maintains a queue of tasks to run. Below is an example illustrating how a controller would create a task and ask the runner to execute it.
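
A rough Python sketch of this structure and the controller interaction; the class names come from the design above, and everything else is assumed detail:

class Task(object):
    """Base task; subclasses provide run()."""
    def run(self):
        raise NotImplementedError

class Search(Task):
    def run(self):
        print("searching...")

class Sort(Task):
    def run(self):
        print("sorting...")

class Runner(object):
    """Maintains a queue of tasks and runs them in order."""
    def __init__(self):
        self.queue = []
    def push(self, task):
        self.queue.append(task)
    def run_all(self):
        while self.queue:
            self.queue.pop(0).run()  # dispose of each task after it runs

# The controller's side of the interaction: create tasks, hand them
# to the runner, and let the runner execute them.
runner = Runner()
runner.push(Search())
runner.push(Sort())
runner.run_all()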



There is room in this simple design for concurrent events that will push tasks onto the Runner task queue. The Runner instance could also execute the tasks in parallel. The idea is to get the simple design right before even considering concurrency.

Friday, October 2, 2009

Polymorphism And Inheritance

One of the key principles of the object-oriented software development paradigm is polymorphism. Polymorphism in the context of object-oriented design is the ability to define behavior that varies by type, as opposed to varying by interface. For instance, in functional programming, invoking different behavior means invoking a different function name. In object-oriented design, behavior can be invoked on instances using a single method name, which means the behavior that is actually executed depends on the type.

Inheritance, another key principle of object-oriented design, plays a big role in implementing polymorphic behavior. Given a class hierarchy, the topmost classes will often define a base interface for behavior. These base methods often go unimplemented; it is the classes that inherit from the base class that are responsible for implementing the behavior. Subclasses even further down the inheritance hierarchy can also define this behavior.

Any descendant of the base class can be used in the same context and will behave as expected. Below is an example illustrating the difference between inheriting a method and providing an implementation for it, and inheriting an already-implemented method.
#Example: polymorphism and inheritance.

#Do imports.
import uuid
import timeit

#Simple person class.
class Person(object):

    #Constructor. Initialize the data.
    def __init__(self):
        self.data = {"first_name": "FirstName",
                     "last_name": "LastName",
                     "id": uuid.uuid1()}

    #Return the first name.
    def get_first_name(self):
        return "first_name_%s" % (self.data["first_name"])

    #Return the last name.
    def get_last_name(self):
        return "last_name_%s" % (self.data["last_name"])

    #Return the id. Unimplemented; subclasses provide the behavior.
    def get_id(self):
        raise NotImplementedError

#Simple manager class that extends Person.
class Manager(Person):

    #Constructor. Initialize the Person class.
    def __init__(self):
        Person.__init__(self)

    #Return the manager id.
    def get_id(self):
        return "manager_%s" % (self.data["id"])

#Simple employee class that extends Person.
class Employee(Person):

    #Constructor. Initialize the Person class.
    def __init__(self):
        Person.__init__(self)

    #Return the employee id.
    def get_id(self):
        return "employee_%s" % (self.data["id"])

#Main.
if __name__ == "__main__":

    #Employee.get_id() timer.
    t_employee_get_id = timeit.Timer("Employee().get_id()",
                                     setup="from __main__ import Employee")

    #Manager.get_id() timer.
    t_manager_get_id = timeit.Timer("Manager().get_id()",
                                    setup="from __main__ import Manager")

    #Employee.get_first_name() timer.
    t_employee_get_first_name = timeit.Timer("Employee().get_first_name()",
                                             setup="from __main__ import Employee")

    #Manager.get_first_name() timer.
    t_manager_get_first_name = timeit.Timer("Manager().get_first_name()",
                                            setup="from __main__ import Manager")

    #Display the results.
    print("Employee Get ID: ", t_employee_get_id.timeit(10000))
    print("Manager Get ID: ", t_manager_get_id.timeit(10000))
    print("Employee Get First Name: ", t_employee_get_first_name.timeit(10000))
    print("Manager Get First Name: ", t_manager_get_first_name.timeit(10000))

Tuesday, September 29, 2009

Instance Factory

A factory in object-oriented programming is a design pattern that creates instances of classes. There are variations on this pattern, but for my purposes I simply refer to it as an instance factory, because that is essentially what it is used for. The factory takes the responsibility of directly instantiating a class away from the context that uses the factory. That context may be some function or, more often, some method of another class. The factory itself is generally a class with several static or class methods; it is these methods that construct and return instances.

So if the developer can take the responsibility of directly instantiating some class away from the method they are currently working on, what do they gain? In this case, it isn't what they gain but what they lose: they lose the direct coupling to the class in question. In most cases, a given class is going to need to create more than one type of instance throughout its lifetime. This means that there is a dependency between the class in question, serving as the context, and the other classes that it depends on. If the class in question requires only a factory, the class then becomes loosely coupled. This is an important design factor.

The instance factory is essentially a proxy for the act of creation. It isn't a proxy for data but for behavior. This is the sole responsibility of the factory. If a developer sees a factory invocation in code, chances are their guess as to what it does will be correct. Since the instance factory is so specialized, it will in turn help with the distribution of responsibilities wherever it is used.
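
A minimal sketch of such a factory in Python; the shape classes are made-up examples:

class Circle(object):
    def describe(self):
        return "circle"

class Square(object):
    def describe(self):
        return "square"

class ShapeFactory(object):
    """Sole responsibility: construct and return instances."""
    shapes = {"circle": Circle, "square": Square}

    @classmethod
    def create(cls, kind):
        return cls.shapes[kind]()

# The calling context is coupled only to the factory, not to the
# concrete classes it produces.
print(ShapeFactory.create("circle").describe())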

Monday, September 28, 2009

User Authentication Design

Most systems today, in fact any system in which a user interacts with the system, will have some kind of user abstraction. Whether that abstraction is the incoming request for application data or an instantiated class that lives on the server, it nonetheless exists. More often than not, the application needs to know who this user is. It can then make decisions about what data, if any, this user can see or modify. This, of course, is authentication.

There are many different approaches taken to implement user authentication. Most web application frameworks have a built-in authentication system. If that authentication system is flexible enough, it will allow for an external authentication system to be used. This is often the route taken by commercial applications, simply because systems that were designed to authenticate often do it well. There is no need to reinvent the wheel. Another reason for doing this might be performance.

However, most simple applications, often web applications, need only simple authentication. By simple, I mean they don't need a production-ready authentication system that can simultaneously handle millions of users. This isn't necessary and would be a waste of time. In these scenarios, simple HTTP authentication will be enough to allow the application to behave as required.

Even simple authentication needs to be designed. There are many approaches that can be taken to implement the underlying authentication, one of which is a self-authenticating user abstraction. The user abstraction is necessary no matter what and should always be present in any design. The self-authenticating approach means that the authentication activity is performed by the user abstraction itself, with no need to invoke a separate party. The structure of such an abstraction is illustrated below.



Once an application receives a request for authentication, the user abstraction is instantiated. Once instantiated, the abstraction is passed the necessary data in order to authenticate itself. The result of the authentication is then passed to the controller responsible for instantiating the user. This sequence is illustrated below.
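
A sketch of what this might look like in Python; the credential check against a dictionary is a stand-in for real, persisted credentials:

# Assumed credential store for illustration; a real system would
# verify against hashed, stored credentials.
KNOWN_USERS = {"alice": "secret"}

class User(object):
    def __init__(self, name):
        self.name = name
        self.authenticated = False

    def authenticate(self, password):
        # The user abstraction verifies itself; no separate
        # authentication system is consulted.
        self.authenticated = KNOWN_USERS.get(self.name) == password
        return self.authenticated

# The controller instantiates the user, hands it the necessary
# data, and receives the result.
user = User("alice")
print(user.authenticate("secret"))  # True
print(user.authenticate("wrong"))   # False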



There are obvious benefits and drawbacks to using such an approach. The drawback is that the user abstraction is instantiated regardless of the authentication outcome, because the authentication can't happen without a user instance. Should a user instance exist, even momentarily, if it isn't authenticated? The benefit is the self-containment: there is no need for an external authentication system, since the user is able to state to the system whether or not it is who it says it is. Of course, this may not even be a good thing; an authentication system may be a desired design element.

Thursday, September 24, 2009

Loose Coupling Decorators

Many programming languages offer the ability to decorate certain constructs, such as functions or methods. What exactly is a decoration in this context? It is referred to as a decorator because the function or method declaration looks as though it is being decorated; the name is syntactically descriptive. So what exactly is a decorator? In Python, a decorator is essentially a function that takes an existing function or method and returns a transformed version of it. The @ symbol denotes a decorator in Python and is placed above the function or method definition.

So how does this help the developer? Why would they want to take a perfectly normal function definition and put some strange syntax around it? The main purpose is that the decorator serves as a factory that can inject objects into the function from other namespaces. This is useful because it allows some developers to define decorators while others define functions and methods that use them. This supports loose coupling because the same decorator can be used for many functions. Additionally, a function may be decorated by many decorators.
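
A small sketch of that injection idea in Python; the config object and function names are illustrative:

# 'config' lives in the decorator's namespace and is injected into
# any function that opts in.
config = {"greeting": "Hello"}

def with_config(function):
    # Return a transformed version of the decorated function.
    def wrapped(*args, **kwargs):
        return function(config, *args, **kwargs)
    return wrapped

@with_config
def greet(cfg, name):
    return "%s, %s" % (cfg["greeting"], name)

# The same decorator can be reused across many functions, which is
# where the loose coupling comes from.
print(greet("World"))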

The effect here is similar to that of inheritance. However, it can be difficult to achieve with inheritance if a strict hierarchy isn't thought up from the onset, since the polymorphic operations all need to be consistent with one another using that approach.

The inheritance approach to loose coupling is probably a superior design to using decorators throughout an application; it is cleaner and provides more consistency. However, if that isn't the initial approach taken and there simply isn't enough time to design a resilient class hierarchy, the decorator approach is a good candidate for achieving loose coupling.

Tuesday, September 22, 2009

Software Preservation

Grady Booch, over at the Handbook of Software Architecture, makes a good point about why preserving classic software systems for future generations is important. There is a storyline behind every system, and within this story lies an endless supply of rationale behind tricky technological problems. Of course, the rationale behind doing such and such with some software component would probably be worth something to some developer in the future. The how probably isn't as important, although it might be. Everything should be preserved as best as possible.

It might be difficult to say what would happen if the software of today isn't preserved for future generations. But what harm could be done if every meaningful software system were preserved for the future? There is probably a mountain of historical data that exists today that is of no particular use besides self-interest. At least, it has no use yet. The only thing that is certain is that we'll never know whether it was worthwhile if we don't do it today.

This got me thinking about software that isn't that old, maybe a few years old. What if I, as a developer, worked on something that is no longer of particular use to anyone else; would it be worth preserving? I think so. There have been countless times where I thought of something I did to solve a similar problem in an older project. Trac really helps here: I just launched Trac and, sure enough, I was able to find what I needed.

Friday, September 18, 2009

Sketching UML

The UML is a largely graphical modeling language used to communicate ideas in software design.  The communication channel may only be between the designer and himself.  It is often beneficial to build diagrams for yourself.  Even in doing so, you are still communicating the ideas.  Today, there exist countless UML diagramming tools in which each diagram is created on the computer screen using a mouse.  If enough effort is put into using these software modeling tools, the finished product that is the diagram often looks very visually appealing.  Perhaps too much so.  This can especially be the case if the diagram created is meant to serve as an aid to an initial idea that may or may not be implemented as illustrated in the diagram.

Since the UML is simply a modeling notation along with the underlying semantics, UML can also be sketched using pencil and paper.  Using this medium for UML diagram creation can help increase the creativity of the design, and it is done in a controlled way, since a common notation, the UML, is used.  Just because a common modeling notation is used in a sketch of some software system does not mean it is a finished product.  Far from it.  All it means is that an idea is being externalized and that there is still room for interpretation in the model.  This is exactly what is desired in the early stages of design, even if the implementation has already started.

The main benefit of sketching UML diagrams is that several layers between the brain and the canvas are removed.  There is a certain mechanical appeal to putting pencil to paper, and I think this helps the design rather than hinders it.  When drawing with UML modeling applications, the act of sketching is done largely by the software.  Sometimes imperfect lines and arcs add to the aesthetics of a design.

These sketches are obviously not ideal as a future reference once the system has moved further along in its life cycle.  For those types of diagrams, the various UML diagramming tools are ideally suited.  The good news is that transitioning a sketch into a digital version isn't too difficult, and it is even easier when a standard such as the UML is used.

Thursday, September 17, 2009

Python Components

There are probably an endless number of definitions of what constitutes a Python component. The question I have is: what is the correct definition, or is there a correct definition for a Python component? It seems to me that some things lean more toward being the preferred form of a Python component, while others build on this concept, and others still are radically different from the vanilla component.

Of course, figuring out what a component is exactly might be a good start. Using the most general idea of what a component is and is not would help us translate these properties over to the Python world. In the most general sense, a component is any replaceable piece of a software system. That is, a component can be pulled out of some system and replaced with an identical component that conforms to the original interfaces. If a new component cannot do this transparently, without causing the system to fail, it isn't a component. It may be considered a component once it has this property, but until then, it isn't.

Having described what a component is at the most basic, generic level, how do we decompose Python systems in the same way? We want to take a piece of a given system written in Python and replace it with another piece. Obviously it needs to conform to the required and provided interfaces of the slot it wishes to fill. But aside from that, what can physically be considered a Python component? At the most fundamental level, most developers would probably consider the module a valid candidate. A module in Python is basically how source code is organized; it is in fact a source code file that supports the modularity concept, hence the name.

The egg is another candidate for a standard Python component. Eggs are the standard method of distributing Python packages; in fact, eggs are Python packages. They typically contain multiple Python modules. So are eggs just another type of Python component, at a higher level than modules? That is tough to say, because eggs can be treated as if they were Python modules once they have been deployed on a given system.

The most compelling feature of eggs, besides the ease of installation, is the entry points feature. Entry points of Python eggs offer services to other eggs installed on the system. Eggs can advertise these services for free; there is no intervention necessary on the developer's behalf. The entry points provided by eggs are also a good candidate for what can be considered a Python component, simply because of the enhanced feature set they offer.
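
As a sketch of how an egg might advertise and consume an entry point with setuptools; the group name "example.plugins" and the module names are made up for illustration:

# --- setup.py of the egg advertising a service ---
from setuptools import setup

setup(
    name="example-plugin",
    version="0.1",
    py_modules=["example_plugin"],
    entry_points={
        "example.plugins": ["hello = example_plugin:hello"],
    },
)

# --- any other code on the system discovering the service ---
import pkg_resources

for entry_point in pkg_resources.iter_entry_points("example.plugins"):
    hello = entry_point.load()
    hello()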

Wednesday, September 16, 2009

The Undesigned

In an interesting entry over at agile focus, we are given an idea of what the myth of the undesigned is all about. In this context, it is all about undesigned software, of course. What this entry stresses is that the act of software development is nothing but design, and I would have to agree. The main argument is that the philosophy of adding design to already-built software is fundamentally flawed. I would also agree here. This does, however, raise several questions as to what counts as already-designed software (if you don't subscribe to the notion that all software is designed). For instance, implementation design, the actual code itself, is very hard to add design to after the fact.

This, I think, is what the author is stressing. An example of adding design to code might be cleaning up code that was previously sloppy. But is this really design being added to the code, or simple rearrangement? I suppose some constraints must be imposed on this sort of cleaning up, simply to ensure that code design doesn't take place while "cleaning up".

No design is indeed a bad design, but designing nothing but code is also a bad design. Implementation is one thing, but it is always best to keep the important, platform-independent design out of the code and in a model of some form or another.

Tuesday, September 15, 2009

Themable UML

The Unified Modeling Language, UML, is a modeling notation used for visualizing the design of software systems. Since it is used to visualize the system in question, the UML can be considered largely graphical by nature. But the UML specification only provides a base notation for each modeling element, in addition to the underlying semantics of the language. What the specification doesn't say is anything about the overall look and feel of a finished diagram, such as a class or sequence diagram.

Most UML tools allow users to alter the color of certain aspects of model elements, like the fill color or the border color. This color value, for instance, can be set as the default for all new class elements placed in the diagram. Features such as this are useful for emphasizing certain modeling elements in a particular diagram, or for grouping certain elements. One may argue that the UML already provides grouping elements, such as package elements, but the package element is only a single dimension in the organization of a model.

A very useful feature of a UML modeling tool would be a theme selector. This would, of course, offer themable UML. But in the context of the UML, what exactly constitutes a theme? Would it simply be the feature mentioned above that gives the modeler the ability to change the color of certain elements for emphasis? I would think not. A themable UML diagram would probably be more along the lines of a color scheme spanning the various UML modeling elements. In addition to the color scheme, subtle element shape variations could be offered by the theme. The idea behind the theme is that there is no need for the modeler to choose appropriate colors that work; the theme just makes the diagram look good.

This would be a good use case for implementing a UML profile. Since a profile can add visual distinctions to the elements to which its stereotypes are applied, it fits the requirements.

With this feature enabled, some more advanced UML diagram output would be required. For instance, HTML output could be used while the various theme distinctions are defined in CSS. This way, a CSS theme framework, similar to that found in jQueryUI could be used.

Tuesday, September 1, 2009

Emphasizing Encapsulation

Encapsulation in software design refers to hiding the details of an implementation behind a set of provided interfaces. This definition isn't exclusive to object-oriented software design; even traditional functions provide an interface while the implementation of that function is hidden. Object-oriented design simply places more emphasis on encapsulation. This is important when designing software simply because it imposes limits on what the client can do with the implementation. This isn't to say that the client invoking the code cannot misuse it; chances are that a developer using even a well-written piece of code is going to misuse it at first. However, if the implementation is hidden, and the provided interfaces are well-designed and well-documented, then it doesn't take very long to figure out the intended use. Contrast this with code that isn't encapsulated. Even if well documented, the developer can never fully conceptualize the intent of the code.

The provided interfaces lie at the core of encapsulation because without them, all you have is an opaque piece of code that cannot be interacted with. This obviously isn't desired and doesn't really exist in practice. Encapsulating messy code is a good thing initially; emphasis should be placed on the quality of the provided interfaces. This isn't an excuse for writing terrible code. It merely states that if one thing needs to be done well and on time, it should be the provided interfaces. Placing the quality requirements in the provided interfaces provides a nice sandbox for refactoring the encapsulated code.
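
A small sketch of this emphasis in Python; the class is illustrative, and the point is that callers depend only on the provided interface while the hidden storage remains free to change:

class TemperatureLog(object):
    """Provided interface: record() and average(). Callers never
    touch the underlying storage, so it can be refactored freely."""

    def __init__(self):
        self._readings = []  # hidden implementation detail

    def record(self, celsius):
        self._readings.append(float(celsius))

    def average(self):
        if not self._readings:
            return None
        return sum(self._readings) / len(self._readings)

log = TemperatureLog()
log.record(20)
log.record(22)
print(log.average())  # 21.0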

Thursday, August 20, 2009

Platform Independence In Dynamic Languages

In today's modern computing world, more and more applications are built in dynamic languages. These languages offer more platform independence than compiled languages do, because dynamic languages run inside a virtual machine that was compiled for the target platform. Dynamic language virtual machines are becoming more prevalent on end-user systems these days; for instance, you'll find a Java virtual machine just about everywhere. In cases where the virtual machine does not exist on the target platform, it can be distributed along with the application. The main benefit, of course, is that once a dynamic language virtual machine is installed, the machine handles the majority of the platform-dependent operations.

There are obvious limits to how independent a given application written in a dynamic language can be, depending mostly on what the application does. Only applications that use the most fundamental features of the language have any chance of being truly platform independent. Realistically, a given application is going to want to take advantage of libraries or modules offered by the language in order to provide better functionality. For instance, most dynamic languages provide an operating system module for low-level system operations. Not all functionality of this module will be supported on all platforms, and there may be subtle behavioral differences in the operations that are supported across platforms. It is the responsibility of the application to handle these scenarios.

The best way to handle these platform-dependent anomalies is to define platform-specific abstractions. This abstraction is illustrated below.



The base Controller class is abstract and should never be instantiated; only one of its children, WinController or LinuxController, should ever be instantiated. The base and its children should only define functionality that differs among the supported platforms.
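
A minimal Python sketch of this abstraction; the class names come from the diagram, while the platform check and the example operation are assumed details:

import sys

class Controller(object):
    """Abstract base; never instantiated directly."""
    def home_directory(self, user):
        raise NotImplementedError

class WinController(Controller):
    def home_directory(self, user):
        return "C:\\Users\\%s" % user

class LinuxController(Controller):
    def home_directory(self, user):
        return "/home/%s" % user

def make_controller():
    # Only this factory needs to know which platform we're on.
    if sys.platform.startswith("win"):
        return WinController()
    return LinuxController()

print(make_controller().home_directory("alice"))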