"For me, great algorithms are the poetry of computation.
Just like verse, they can be terse, allusive, dense, and even
mysterious. But once unlocked, they cast a brilliant new light
on some aspect of computing" - Francis Sullivan
My tour around the world of computer science, picking up little tidbits I learn along the way.
I'm interested in learning more about OpenFlow, so I thought I'd go through the OpenFlow tutorial. Just to make things interesting, I'm going to port it over from Beacon to Floodlight. The first step is to remove the default forwarding behavior and make the controller and switch act as a simple hub.
First we'll comment out the calls to initForwarding() and forwarding.startUp() in Controller.java, as well as the references to the forwarding global variable:
protected void init() {
    topology = new TopologyImpl();
    deviceManager = new DeviceManagerImpl();
    counterStore = new CounterStore();
    pktinProcTime = new PktinProcessingTime();
    routingEngine = new RoutingImpl();
    flowCacheManager = new FlowCache();
    initStorageSource();
    topology.setFloodlightProvider(this);
    topology.setStorageSource(storageSource);
    deviceManager.setFloodlightProvider(this);
    deviceManager.setStorageSource(storageSource);
    deviceManager.setTopology(topology);
    initMessageFilterManager();
    initStaticFlowPusher();
    //initForwarding();
    // call this explicitly because it does setup
    this.setStorageSource(storageSource);
    HashSet<ITopologyAware> topologyAware = new HashSet<ITopologyAware>();
    topologyAware.add(deviceManager);
    topologyAware.add(routingEngine);
    topology.setTopologyAware(topologyAware);
    topology.setRoutingEngine(routingEngine);
    HashSet<IDeviceManagerAware> dmAware =
        new HashSet<IDeviceManagerAware>();
    //dmAware.add(forwarding);
protected void startupComponents() {
    // now, do our own init
    try {
        log.debug("Doing controller internal setup");
        this.startUp();
    } catch (IOException e) {
        throw new RuntimeException(e);
    }
    log.debug("Starting topology service");
    topology.startUp();
    log.debug("Starting deviceManager service");
    deviceManager.startUp();
    // no need to do storageSource.startUp()
    log.debug("Starting counterStore service");
    counterStore.startUp();
    log.debug("Starting routingEngine service");
    routingEngine.startUp();
    //log.debug("Starting forwarding service");
    //forwarding.startUp();
protected void debugserver_start() {
    Map<String, Object> locals = new HashMap<String, Object>();
    locals.put("controller", this);
    locals.put("deviceManager", this.deviceManager);
    locals.put("topology", this.topology);
    locals.put("routingEngine", this.routingEngine);
    //locals.put("forwarding", this.forwarding);
Now we import the Hub class:
import net.floodlightcontroller.hub.Hub;
Then create a hub global variable:
protected Hub hub;
In the init() method we create our Hub instance and set the floodlight provider:
hub = new Hub();
hub.setFloodlightProvider(this);
Then we tell the hub to start up in the startupComponents() method:
hub.startUp();
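Conceptually, all the hub does is flood: every frame that arrives on one port goes back out every other port. Here's a minimal plain-Java sketch of that behavior (this is just a toy model to show the idea, not the Floodlight API; the real Hub module builds an OFPacketOut whose action outputs to the OFPP_FLOOD virtual port):

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of hub behavior: a frame arriving on one port is
// repeated on every other port of the switch.
public class HubModel {
    private final int portCount;

    public HubModel(int portCount) {
        this.portCount = portCount;
    }

    // Returns the output ports for a frame that arrived on inPort.
    public List<Integer> flood(int inPort) {
        List<Integer> out = new ArrayList<>();
        for (int p = 1; p <= portCount; p++) {
            if (p != inPort) {
                out.add(p);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        HubModel hub = new HubModel(4);
        // A frame arriving on port 2 goes out ports 1, 3, and 4.
        System.out.println(hub.flood(2)); // [1, 3, 4]
    }
}
```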
After these changes I started up the controller, and data was forwarded out all ports, just like a hub should!
I'm taking a class on cloud computing this semester, and our first reading assignment is entitled "Cloudonomics". Here's a link to the paper: link. The focus is the economic viewpoint of cloud computing: it starts with the definition provided by NIST and gives an economic view of each piece. The author translates the components into an acronym:
Common infrastructure
Location independence
Online connectivity
Utility pricing
on-Demand resources

Here are my notes:
Multiplexing demand over a common infrastructure can increase utilization, lowering the cost per delivered resource.
The coefficient of variation is cv = standard deviation / |mean|; the lower the cv, the smoother and flatter the demand.
The smoother and flatter the demand, the better the utilization.
Multiplexing demand can help reduce cv, increasing utilization.
This is especially true if the demands are offset, as with the power grid: business usage peaks during the day while home usage peaks in the evening.
These economies are reached even at the mid-sized provider level, giving such providers scale benefits similar to those of large providers.
Latency is an issue, especially with user interaction.
Latency depends on distance, because of the speed of light in fiber, and on the number of router hops.
As a private company tries to reduce that latency by increasing coverage, it will see diminishing returns on its investment.
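Back-of-the-envelope: light in fiber travels at roughly 2/3 of c, about 200,000 km/s, so distance alone puts a hard floor under round-trip time before any router hops are counted. A small sketch using these usual rough figures (they're standard approximations, not numbers from the paper):

```java
public class FiberLatency {
    // Light in fiber travels at roughly 2/3 c, i.e. about
    // 200,000 km/s, or 200 km per millisecond one way.
    static final double KM_PER_MS = 200.0;

    // Minimum round-trip time in ms over a given fiber distance,
    // ignoring router hops and queuing, which only add to this.
    static double minRttMs(double distanceKm) {
        return 2 * distanceKm / KM_PER_MS;
    }

    public static void main(String[] args) {
        // New York to London is roughly 5,600 km great-circle.
        System.out.printf("min RTT NY-London: %.0f ms%n", minRttMs(5600));
    }
}
```

Real-world RTTs are higher still because fiber routes aren't great circles and hops add delay, which is exactly why coverage buys only diminishing latency improvements.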
Cost savings come not just from lower per-unit prices but from utility pricing: you only pay for what you use.
Analogy: renting a car vs. buying one for a couple of days' use.
It gets interesting when using the cloud costs more per unit than owning the resource but demand is variable: there is a utility premium that must be accounted for. Intuitively, if the demand is long-term it may be cheaper to buy; if short-term, it may be cheaper to rent.
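That intuition can be sketched numerically: owning means provisioning (and paying) for peak demand the whole time, while renting means paying the premium rate only for what's used, so renting wins whenever the peak-to-average demand ratio exceeds the utility premium. A toy comparison with made-up numbers:

```java
public class UtilityPremium {
    // Cost of owning: you must provision for peak demand and pay
    // for that capacity over the entire period.
    static double ownCost(double[] demand, double unitPrice) {
        double peak = 0;
        for (double d : demand) peak = Math.max(peak, d);
        return peak * unitPrice * demand.length;
    }

    // Cost of renting: pay the (pricier) utility rate, but only
    // for the resources actually consumed.
    static double rentCost(double[] demand, double unitPrice, double premium) {
        double total = 0;
        for (double d : demand) total += d;
        return total * unitPrice * premium;
    }

    public static void main(String[] args) {
        double[] spiky = {1, 1, 10, 1, 1, 1}; // peak/average = 4
        double price = 1.0, premium = 2.0;     // cloud costs 2x per unit
        System.out.println("own:  " + ownCost(spiky, price));   // 60.0
        System.out.println("rent: " + rentCost(spiky, price, premium)); // 30.0
        // Renting wins here because peak/average (4) > premium (2).
    }
}
```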
On-demand avoids excess and insufficient resources.
Really useful when demand is unpredictable and/or non-linear
The network costs that make sharing possible must be factored in.
Users may be slow to adopt due to "loss aversion"
But lack of upfront costs may help speed adoption
Finding the optimal tradeoff between statistics of scale and user experience is intractable.