Modeling software is an expensive activity, for a multitude of reasons. The most obvious is the time lost on unsuccessful designs: software models take time to put together, unless you are sketching, where the goal is to not spend much time on them. Contrast this with a failed initial implementation built with no up-front modeling. There is a high probability that reusable software components can be salvaged from the failure. Developers tend to become fixated on functional software, and if something at least somewhat functional comes out of a disaster, it still helps morale.
So, if detailed modeling has no place up front, does it have any place in software development at all? Creating a detailed model of an existing software system can prove valuable. The key to modeling existing systems is to pinpoint exactly where the design has gone wrong. This is nearly impossible to spot during initial development, especially under time constraints, when design takes a back seat to delivering a functional system.
The obvious drawback to up-front modeling in software development is that it promotes a waterfall approach, which doesn't work as well as an iterative and incremental approach, if at all. This doesn't mean you shouldn't spend any time modeling; it means any up-front models should be treated as informal at best. Perhaps it makes more sense not to call them models at all, since that suggests a level of formality we want to stay away from initially. Simple diagrams treated as sketches are a good practice to follow early on.
Formal models of software systems created after the system in question has been deployed in a real environment can prove valuable, because by then you have a functional system that is relatively stable. After a system has been deployed for a while, hundreds, if not thousands, of smaller bugs have been eliminated. These are the types of bugs that a software model isn't likely to solve in a timely manner. With the smaller issues mostly removed, we are free to tackle the larger design issues that aren't easily solved with code.
One of the first things you might want to model is the dependencies between the packages and modules that make up the system. I've tried this before and was amazed at the problems I could see before even trying to model inheritance between classes. When several dependency lines cross one another, it simply looks bad, and that is often a reflection of a sub-optimal design. The poorly painted picture provides plenty of motivation to fix these issues. The same goes for class hierarchies. The relationships among the elements in the system, not just inheritance, are worth modeling; the value of modeling the internal details of each element isn't as high. Encapsulation applies to software models too, to a certain degree.
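As a starting point for this kind of dependency modeling, the import graph can be extracted mechanically. The sketch below uses Python's standard ast module to list the modules a piece of source imports; the sample source string is only an illustration.

```python
# A minimal sketch of extracting module dependencies with the standard
# library's ast module -- a starting point for drawing a dependency
# diagram, not a full modeling tool.
import ast


def module_dependencies(source):
    """Return the set of module names imported by the given source code."""
    tree = ast.parse(source)
    deps = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            deps.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            deps.add(node.module)
    return deps


source = "import os\nfrom collections import deque\n"
print(sorted(module_dependencies(source)))  # → ['collections', 'os']
```

Running this over every module in a package, and drawing an edge for each import, gives exactly the kind of picture where crossing dependency lines become visible.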
Sunday, May 2, 2010
Tuesday, November 17, 2009
Twisted System Events
One of the key features of Twisted, the event-driven networking framework for Python, is the ability to define reactors that react to asynchronous events. One concept of the Twisted reactor is the system event. The ReactorBase class is inherited by all reactor types in Twisted, as the name suggests. It is this class that provides all other reactors with a system event processing implementation.
An event, in the context of the Twisted reactor system, has three phases, or states: "before", "during", and "after". This gives developers a means to conceptually organize the triggers that are executed when a specific event is fired. The "before" state should execute triggers that verify certain data or perform setup tasks; anything that would be considered a pre-condition is executed here. The "during" state is the overall goal of the event. Triggers executed in this state should do the heavy processing. Conceptually, this is the main reason a trigger was registered with the specified event type in the first place. Finally, the "after" state executes triggers that perform post-condition testing or clean-up tasks.
Illustrated below are the various states that a Twisted system event goes through during its lifetime. The transitions between states are quite straightforward: when there are no more triggers to execute for the current state, the next state is entered.

Event triggers are registered for specific event types by invoking the ReactorBase.addSystemEventTrigger() method. This method accepts an event phase, an event type, and a callable, which can be any callable Python object.
The type of event that triggers can be registered to can be anything; the event type is merely the key under which an event instance is stored. A _ThreePhaseEvent instance is created if one is not already part of the reactor. That is, if a trigger has already been registered for the same event type, an event instance already exists. The _ThreePhaseEvent instance for each event type is responsible for executing all event triggers in the correct order. Using the Twisted system event functionality means that dependencies between event states may be used to achieve the desired functionality.
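The three-phase idea can be illustrated with a simplified, self-contained sketch. This is not Twisted's actual _ThreePhaseEvent implementation, only an approximation of how triggers registered against the "before", "during", and "after" phases run in order when the event fires.

```python
# A simplified sketch of a three-phase event -- an illustration of the
# concept, not Twisted's real _ThreePhaseEvent class.
class ThreePhaseEvent:
    def __init__(self):
        # Triggers are grouped by the phase they were registered under.
        self.triggers = {"before": [], "during": [], "after": []}

    def add_trigger(self, phase, func, *args):
        # Mirrors the spirit of ReactorBase.addSystemEventTrigger():
        # register a callable against one of the three phases.
        self.triggers[phase].append((func, args))

    def fire(self):
        # Phases run in order; when no triggers remain for the current
        # phase, the next phase is entered.
        for phase in ("before", "during", "after"):
            for func, args in self.triggers[phase]:
                func(*args)


log = []
event = ThreePhaseEvent()
event.add_trigger("after", log.append, "cleanup")
event.add_trigger("before", log.append, "setup")
event.add_trigger("during", log.append, "work")
event.fire()
print(log)  # → ['setup', 'work', 'cleanup']
```

Note that triggers fire in phase order regardless of registration order, which is the property that makes the pre-condition, main-work, post-condition organization possible.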
Wednesday, June 10, 2009
Sending Django Dispatch Signals
In any given software system, there exist events that take place. Without events, the system would in fact not be a system at all; we would have nothing more than a schema. In addition to events taking place, there are often, but not always, responses to those events. Events can be thought of abstractly or modeled explicitly. For instance, the method invocation obj.do_something() could be considered an invocation event, or a "do something" event. This is an abstract way of thinking about events in an object-oriented system. Developers may not even think of a method invocation as an event taking place, but the abstraction is there if needed; a method invocation is an event when it needs to be, because it has a location in both space and time. Events can also be modeled explicitly in code. This is the case when designing a system that employs a publish-subscribe event system: events are explicitly published, while the responses to events subscribe to them. Another common form of event terminology replaces event with signal. This is the terminology used by the dispatching system of the Django Python web application framework.
Django defines a single Signal base class in dispatcher.py, and it is a crucial part of the dispatching system. The responsibility of the Signal class is to serve as a base class for all signal types that may be dispatched in the system. In the Django signal dispatching system, signal instances are dispatched to receivers. Signal instances can't just spontaneously decide to send themselves; there has to be some motivating party, and in the Django signal dispatching system this concept is referred to as the sender. Thus, the three core concepts of the Django signal dispatching system are signal, sender, and receiver. The relationship between the three concepts is illustrated below.

Senders of signals may dispatch a signal to zero or more receivers. The only way that zero receivers receive a given signal is if zero receivers have been connected to that signal. Additionally, receivers, once connected to a given signal, have the option of only accepting signals from a specific sender.
So how does one wire the required connections between these signal concepts in the Django signal dispatching system? Receivers connect to specific signal types by invoking the Signal.connect() method on the desired signal instance. The receiver being connected to the signal is passed to this method as a parameter. If this receiver is to accept these signals only from specific senders, the sender can also be specified as a parameter to this method. Once connected, the receiver will be activated whenever a signal of this type is sent by a sender. A sender sends a signal by invoking the Signal.send() method. The sender itself is passed as a parameter to this method. This is a required parameter, even though the receiver may not necessarily care who sent the signal. However, it is good practice not to take chances here: if, from the signal sending point of view, there is always consistency in who the sender is, there is a new level of flexibility on the receiving end. Illustrated below is a sample interaction between a sender and a receiver using the Django signal dispatching system to send a signal.

The fact that the signal instances themselves are responsible for connecting receivers to signals, as well as for the actual sending of the signals, may seem counter-intuitive at first, especially if one is used to working with publish-subscribe style event systems, where the publishing and subscribing mechanisms are independent of the publisher and subscriber entities. In the end, however, the same effect is achieved.
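The connect-then-send flow described above can be sketched with a minimal, self-contained Signal class. This is an illustration of the pattern, not Django's actual implementation; the request_done signal and log_request receiver are made-up names for the example.

```python
# A minimal sketch of the signal/sender/receiver pattern -- an
# illustration of the idea, not Django's real Signal class.
class Signal:
    def __init__(self):
        self.receivers = []

    def connect(self, receiver, sender=None):
        # A receiver may optionally restrict itself to one sender;
        # sender=None means "accept this signal from anyone".
        self.receivers.append((receiver, sender))

    def send(self, sender, **kwargs):
        # Deliver to every receiver, skipping those bound to a
        # different sender, and collect the return values.
        results = []
        for receiver, wanted in self.receivers:
            if wanted is None or wanted == sender:
                results.append(receiver(sender=sender, **kwargs))
        return results


# Hypothetical signal and receiver names, for illustration only.
request_done = Signal()


def log_request(sender, path, **kwargs):
    return "%s handled %s" % (sender, path)


request_done.connect(log_request)
print(request_done.send(sender="webapp", path="/home"))
# → ['webapp handled /home']
```

As in Django, the signal instance itself carries both the connect() and send() mechanics, rather than delegating them to a separate broker.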
Labels: dispatch, django, event, python, signal, system, webapplication, webframework
Wednesday, February 18, 2009
Python Memory Usage
Here is an example in Python of how to retrieve the system memory usage. This example was adapted from an answer on Stack Overflow.
Here we have a simple class called MemUsage. The constructor initializes the attributes needed to compute the memory usage. The init_data() method is what MemUsage invokes to retrieve the required system data. It does this by using the subprocess module to execute the free command, then mapping the resulting data to the corresponding attributes. We compute the memory usage as a percentage by subtracting the buffers and cache from the used memory and dividing the result by the total memory.
# Example: get the system memory usage by parsing the output of free.
import subprocess

class MemUsage(object):
    def __init__(self):
        self.total = 0
        self.used = 0
        self.free = 0
        self.shared = 0
        self.buffers = 0
        self.cached = 0
        self.init_data()

    def init_data(self):
        # Run the free command and capture its output as text.
        process = subprocess.Popen("free",
                                   stdout=subprocess.PIPE,
                                   universal_newlines=True)
        stdout_list = process.communicate()[0].split('\n')
        for line in stdout_list:
            data = line.split()
            try:
                if data[0] == "Mem:":
                    self.total = float(data[1])
                    self.used = float(data[2])
                    self.free = float(data[3])
                    self.shared = float(data[4])
                    self.buffers = float(data[5])
                    self.cached = float(data[6])
            except IndexError:
                # Skip blank or short lines in the output.
                continue

    def calculate(self):
        # Subtract buffers and cache from used memory, then express
        # the result as a percentage of total memory.
        return ((self.used - self.buffers - self.cached) / self.total) * 100

    def __repr__(self):
        return str(self.calculate())

if __name__ == "__main__":
    print(MemUsage())