It is always great to see someone who is really smart and knows what they are talking about... it also kind of makes you wonder whether your E-Trade account is up to the task and, even if it is, whether it might be just like picking up pennies in front of an oncoming steamroller.
Techaisle Blog
Before you attempt any modelling you should first look at the inputs and outputs that you want to go into your modelling. Here is the matrix: a 2x2 grid that classifies each input variable as Measurable or Not Measurable on one axis, and Controllable or Not Controllable on the other, giving four cells.
What you need to do is to make a laundry list of the variables (inputs) that affect the output. Typically in a marketing company one would look at sales as the output and a whole lot of variables as inputs. Let me look at a few examples for these cells.
1. Measurable-Controllable Variables
GRPs of your brand through TV advertising are measurable and controllable.
2. Measurable-Not-Controllable
Inflation is measurable but not controllable.
3. Not Measurable-Not Controllable
The amount of investment made by your competition in dealer incentives is neither easy to measure accurately nor under your control. But this activity impacts the sales of your brand.
4. Not Measurable-Controllable
Not measurable generally refers to qualitative factors, which are quite often measured by a pseudo variable, for example, the quality of your salespeople.
In your business environment if the majority of your input variables are in Cells 1 and 2, and you feel that these make a big impact, then modelling will be successful. If not, and many variables are in Cells 3 and 4, modelling will not be a success.
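To make this preliminary exercise concrete, here is a minimal sketch in Python of how one might record the classification and apply the rule of thumb above. The variable names, their cell assignments, and the simple majority threshold are illustrative assumptions, not a prescribed method.

```python
# Minimal sketch: classify each input variable into one of the four cells of the
# measurable/controllable matrix and apply the rule of thumb described above.
# The variables and their flags are illustrative assumptions only.

variables = {
    "tv_grps":                      {"measurable": True,  "controllable": True},   # Cell 1
    "inflation":                    {"measurable": True,  "controllable": False},  # Cell 2
    "competitor_dealer_incentives": {"measurable": False, "controllable": False},  # Cell 3
    "salesperson_quality":          {"measurable": False, "controllable": True},   # Cell 4
}

def cell(flags):
    """Map a variable's measurable/controllable flags to its cell number."""
    if flags["measurable"]:
        return 1 if flags["controllable"] else 2
    return 4 if flags["controllable"] else 3

cells = {name: cell(flags) for name, flags in variables.items()}
share_in_cells_1_2 = sum(1 for c in cells.values() if c in (1, 2)) / len(cells)

print(cells)
if share_in_cells_1_2 > 0.5:
    print("Most inputs are measurable: modelling is likely to be workable.")
else:
    print("Too many inputs fall in Cells 3 and 4: expect trouble with the model.")
```

The point is only that the classification can be written down and checked before any statistical work begins; the threshold itself is a judgment call, not a statistical test.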
Most companies do not undertake this simple preliminary exercise of classifying the variables that impact their business, and then hit potholes throughout design, testing, and implementation.
Unclassified variables are veritable landmines. Watch out for them.
Dr. Cooram Ramacharlu Sridhar (Doc)
Techaisle
Predictive Analytics (PA) is emerging as an important tool in the area of business decision-making. Predictive Analytics primarily deals with making a forecast based on several inputs. In this and the blogs that follow I will share my experiences with Predictive Modelling (PM), with a view to contributing to the current knowledge base in the Predictive Analytics world.
In the world of business most predictive analytical tools are quantitative: numeric data is used for building an input-output model, and the output is the prediction for specific inputs. For example, "A 10% increase in advertising in January will result in a 1% increase in sales in May" is a typical output from predictive analytics.
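To show what such an input-output statement looks like when written down as a model, here is a deliberately trivial sketch. The 0.1 elasticity and the January-to-May lag are invented numbers chosen only to reproduce the example sentence above; they are not estimates from any real model.

```python
# Illustrative sketch of the input-output statement quoted above.
# AD_ELASTICITY and LAG_MONTHS are assumed values, not estimated ones.

AD_ELASTICITY = 0.1   # assumed: each 1% increase in advertising lifts sales by 0.1%
LAG_MONTHS = 4        # assumed: advertising in January shows up in May sales

def predicted_sales_lift(ad_increase_pct: float) -> float:
    """Predicted % sales lift, LAG_MONTHS after the change in advertising."""
    return AD_ELASTICITY * ad_increase_pct

print(f"A 10% ad increase in January -> {predicted_sales_lift(10):.1f}% sales lift in May")
```

In practice the elasticity and the lag would themselves be outputs of the modelling exercise, which is where the difficulties discussed below begin.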
Common Mistake of Predictive Modelers: Assumption of linearity
Predictive Models are largely based on statistical techniques. The Multiple Linear Regression (MLR) model is what most users will confront when they look at predictive models. This model works in the background whether one is using multiple time series or multi-level modelling.
Multiple Linear Regression Models are developed based on a crucial assumption: the output is linearly dependent on the inputs. But all experience shows that in most business situations the assumption of linearity is not valid. Hence the statistical models have a poor fit and low predictive capability. In addition, the business world also suffers from Black Swan problems that no modelling can manage with any level of confidence.
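As a quick illustration of the problem, the sketch below generates sales data that responds to advertising with diminishing returns and then forces a straight-line fit (MLR with a single input) onto it. The data-generating curve, the noise level, and the use of NumPy are assumptions made purely for demonstration.

```python
import numpy as np

# Sketch: the "true" relationship has diminishing returns (nonlinear),
# but we fit a straight line, as a one-input MLR would.
rng = np.random.default_rng(0)
advertising = np.linspace(1, 100, 200)
sales = 50 * np.log(advertising) + rng.normal(0, 5, advertising.size)  # assumed nonlinear truth

# Ordinary least squares straight-line fit
slope, intercept = np.polyfit(advertising, sales, 1)
predicted = slope * advertising + intercept

# R^2 of the linear fit against the nonlinear reality
ss_res = np.sum((sales - predicted) ** 2)
ss_tot = np.sum((sales - sales.mean()) ** 2)
print(f"R^2 of the straight-line fit: {1 - ss_res / ss_tot:.2f}")

# The line captures the broad upward trend but systematically misses the
# curvature at both ends, which is the poor fit and low predictive
# capability described above.
```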
The net effect of the linearity assumption, which is ubiquitous in almost all statistical modelling, and the resultant poor fit and low predictive capability, has led to a frustrated user community. Hence, business executives look at models with suspicion and trust their 'gut' to make decisions.
Predicament
The Predictive Modellers' predicament is this: how do we get away from the linearity assumption on which almost all statistical tools are based, when we know that this assumption is a poor, in fact a very poor, approximation of real-world behaviour?
The story of our approach to modelling starts from this predicament, which we have shared with everyone else, and the path we carved out to get out of it.
Dr. Cooram Ramacharlu Sridhar (Doc)
Managing Director and Advisor, Segmentation & Predictive Modeling
- 6-72 TB of NAND Flash capacity
- Up to 650,000 IOPS
- Up to 7 GB/sec bandwidth
- Asynchronous replication
- VMware and Citrix ready
The products are completely scalable. A mid-market customer can begin with Accela and add Invicta through InfiniBand as needs grow. Even within the Invicta chassis, up to six storage nodes of 6 to 12 TB each and one router can be added like Lego blocks as the data needs evolve.
Analyst Speak
Whiptail's announcement comes at a time when the buzz about big data has reached a crescendo. Along with big data, vendors and analysts have started to talk about data obesity and therefore the need for more storage capacity. Granted, storage capacity needs are multiplying, but big data poses a bigger challenge: extremely high throughput and read/write performance. Traditional storage vendors have tried to build higher-performing storage either by using as many spindles as possible or by constraining drives. Neither approach really addresses the velocity problem: real-time streams of high-volume information that is both structured and unstructured. Whiptail is taking the conversation away from a storage-capacity play to a velocity play, thereby reducing the cost of transactions.
Even channel partners wanting to develop or expand their datacenters and offer cloud-based services can use Invicta because of its multi-tenant, multi-administrator, and role-based security capabilities.
Invicta is an application acceleration platform that big data purveyors will love, to the bane of other storage vendors.
Anurag Agrawal
Techaisle