S/W Theories




A/B Testing



The key to powerful growth hacking is to establish the discipline of A/B testing your efforts. An A/B test is really just testing one variation against the current option. You might A/B test an advertisement by trying a different call to action, say "Buy Now" versus "Buy Today." Or you might try a new element in the user interface in hopes of increasing time spent on site. Every A/B test should have a clearly defined goal: what do you want to see happen, and what data do you need to measure to understand whether the idea worked? With A/B testing, your ideas can be tested in real time.
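In practice, driving users into variants usually comes down to assigning each visitor a stable bucket. Here's a minimal JavaScript sketch of deterministic traffic splitting; the hash function and the 10% share for the new variant are illustrative assumptions, not any particular tool's API:

    // Stable bucket assignment: the same user always sees the same variant.
    function hashString(s) {
      let h = 0;
      for (let i = 0; i < s.length; i++) {
        h = (h * 31 + s.charCodeAt(i)) >>> 0; // unsigned 32-bit rolling hash
      }
      return h;
    }

    function assignVariant(userId) {
      const bucket = hashString(userId) % 100; // 0-99
      return bucket < 10 ? 'B' : 'A'; // send roughly 10% of traffic to variant B
    }

    console.log(assignVariant('user-12345')); // 'A' or 'B', stable per user

Because the assignment is deterministic, a returning user keeps seeing the same variant, which keeps the measured behavior clean.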

Using some relatively simple tools, you can drive a fraction of your users to a different version of the page they were trying to access. There, you can measure their behavior and compare it to the existing solution. What's great about A/B testing is that you don't need to have equal groups of people. Instead, you need just enough volume that your results are statistically significant. This basically means the results you're seeing are unlikely to have happened by chance. To show you what I mean, let's look at an example of the results from a landing page test.

In Test A, we had 10,000 visitors and 50 conversions, a 0.5% conversion rate. In Test B, we had 25,000 visitors and 150 conversions, a 0.6% conversion rate. At a glance, you'd be inclined to call Test B the winner: it has more conversions and a higher conversion rate. But we can use a statistical significance calculator to determine the truth behind this data. So here's a look at the output.

We're only 88% confident that Test B performs better than Test A, so we would need to let the test run longer, until we had enough data to determine it was really the winner. You're looking for a confidence value in the neighborhood of 95%. Now take a look at this same set of data, but this time we'll assume Test A had 80 conversions (a 0.8% conversion rate) and Test B still had 150. It's hard to tell at a glance: Test A has the higher conversion rate, but Test B has more conversions. When we plug the data into that calculator, we find that Test A converted 33% better than Test B, and our results have a 98% confidence rating.

Therefore, we would call Test A the winner. It's important to build A/B testing into your growth-hacking efforts. You can run A/B tests on your own, but if you need a little extra structure, take a look at the resources provided by Optimizely and Visual Website Optimizer. All A/B tests should start with a great hypothesis, and remember, a failed test isn't a bad thing. Learn from it and explore other alternatives.
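If you'd like to sanity-check confidence numbers like the ones above without a web calculator, here's a minimal JavaScript sketch of the underlying two-proportion z-test. It uses a standard numerical approximation of the error function and reports one-tailed confidence; real tools vary in their exact conventions, so treat it as illustrative:

    // Approximate the standard normal CDF via the Abramowitz-Stegun erf formula.
    function normalCdf(z) {
      const x = Math.abs(z) / Math.SQRT2;
      const t = 1 / (1 + 0.3275911 * x);
      const erf = 1 - ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t
        - 0.284496736) * t + 0.254829592) * t * Math.exp(-x * x);
      return z < 0 ? (1 - erf) / 2 : (1 + erf) / 2;
    }

    // One-tailed confidence that the two conversion rates really differ.
    function confidence(visitorsA, convA, visitorsB, convB) {
      const pA = convA / visitorsA;
      const pB = convB / visitorsB;
      const pooled = (convA + convB) / (visitorsA + visitorsB);
      const se = Math.sqrt(pooled * (1 - pooled) * (1 / visitorsA + 1 / visitorsB));
      return normalCdf(Math.abs(pA - pB) / se);
    }

    console.log(confidence(10000, 50, 25000, 150)); // ~0.87: not yet conclusive (the calculator above reports 88%)
    console.log(confidence(10000, 80, 25000, 150)); // ~0.98: Test A wins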



    Data will Paint the Picture

    Confidence in the result after a longer test run will drive the decision




To be successful, having a high-level overview of your data isn't enough. You have to be willing to study the data at each point in the customer's journey. It's more than time on site and total monthly visits. It's a quest to understand why they're doing what they're doing. The data will paint a picture, and it might not be obvious at first, but when you become extremely familiar with your data, it'll be a lot clearer. In fact, we should really start by determining how familiar you are with your data. Off the top of your head, can you name the most popular section of your website? How about the number one referral source for each of your user personas? What's the most common technology your site is accessed with? Or, on average, what's the most popular time for people to access your site? If you've got these nailed, that's great, but it doesn't mean you can stop digging.

Always push for more, because an important habit to form as a growth hacker is pursuing data relentlessly. Start every morning by looking over your metrics, familiarizing yourself with the information, exploring areas that are new and foreign, and trying to correlate things that might be seemingly unrelated. Look at everything from your social media metrics to your customer service email volumes. It all counts for something. And once you think you've learned enough, dig deeper. It's an incredibly valuable skill to hone and it separates good growth hackers from great growth hackers.

Over time, you'll learn where to spend your time, but don't worry about being terribly efficient as you learn. It's going to take a lot of practice and persistence. And if you're not naturally drawn to long spreadsheets full of numbers, it'll feel a little mundane. I'm assuming you're running a lean operation here, which means you won't have access to a data science team or enterprise-level data mining software. This means you're on the front lines for this information, all the more reason to spend your time researching. I'll share an example from Etsy to help solidify this topic.

I saw a presentation by Dan McKinley, a former engineer at Etsy.com. If you're unfamiliar with Etsy, they're a marketplace for handmade or vintage items. Members can build their own storefronts and sell products through their platform. Now they knew their users had a problem, the data said so. Users wanted more results per page, and they wanted them faster. So Etsy jumped in and spent five months developing infinite scrolling. The idea was that as you scrolled down the page, you'd get new results and never reach the bottom. They rolled it out and kept a close eye on the data.

They found that people were buying fewer things. The results were negative. They had to go back to how things were before, and that was an important learning experience. They heard the problem users were having, and they implemented a fix. If they hadn't monitored their data diligently, they would've never realized the results were terrible. And they knew when they rolled back that they would still have the problem of not showing enough results; it would be back to the drawing board on how to solve it. Once you're really familiar with your data, start running tests. Try sending people through your funnel.

Give people unfamiliar with your site a specific goal. That might be to buy a specific product, or to subscribe to and then cancel a free trial. Observe what happens. If you'd like to test this at scale, check out usertesting.com. Next, run the same test with a competitor or a product in a similar niche, again using usertesting.com if you need scale. Finally, take the qualitative data and start mapping it to the quantitative. Do the issues outlined make sense based on what you're seeing? Do you have a problem that is larger than just one or two users complaining? If you notice high abandonment after a user looks at a certain product, then you've got a starting point to analyze.

There's an endless amount of data you can interact with. The moral is to not trust your gut. Avoid leaning on instinct, and instead, let the data do the talking.


Handling failed experiments



It's inevitable. At some point an idea, a product feature, or even a growth strategy will fail. How you handle that failure is really important. It's really easy to sweep a failure under the rug, chalk it up to a series of bad decisions, and move on. And while moving on is great, if you don't take time to analyze the failure, you'll never truly learn from it. And as a growth hacker, learning from the data is a top priority. There's really no excuse not to learn. Data is so accessible now that every failure should get a thorough debrief.

Find the root of the problem. Even if you're working solo, you still need to walk through everything from start to finish. Not only do you learn from your mistakes, but you'll also get better at recognizing the warning signs before failure happens. To get you thinking about how to approach failure, let's look at some common reasons things fail. One that I see often is that the data was misinterpreted. When analyzing a set of data, survey results, or even sales information, mistakes happen. Growth hacking is all about executing based on the data.

The data is the road map. And if you've got a bad set of numbers or an incorrect correlation, your map will send you in the wrong direction. This is often preventable by double-checking the data, reevaluating the collection technique, or checking for outliers that may have skewed everything. Another common point of failure is misunderstanding the customer. As marketers, we like to think we're pretty good at understanding our customer. And we probably do have a strong grasp of who they are and what they want. But it's really easy to introduce bias when we come up with ideas.

It's so easy to take something we personally want and project that need onto our customer, turning it into something they need. And maybe it's something our customer actually asked for, but we didn't fully think about how it impacts everything collectively. More often than not, customers aren't really thinking about the big picture, either. They have a specific need, and it's your job to take a step back and explore how important it really is. Next, we have failure due to subpar execution. Execution is incredibly important.

The best ideas can only get so far; if things break down in the execution, the idea will fail. Consider reviewing all the steps of how the project went. Where were the roadblocks? What in the execution failed? And how will you resolve that in the future? Failures in this area often identify areas where we need to grow, either personally or as a team. Work on those weaknesses to prevent this hurdle from popping up in the future. Along with the theme of executing on a task is the issue of time.

A lot of projects fail because there wasn't enough time. In hindsight, it's easy to see the warning signs of biting off more than you or your team can chew, but when I map out ideas, I always double the time estimates provided to me. I'd rather be early than late. When deadlines are critical, be prepared to shift priorities and cut features to make it work. Another point of failure is the unwillingness to see things through. If you've done the research and you trust your data, don't pull the plug early. A lot of great projects get shut down just shy of their tipping point.

Stick to the plan. If you do fail, do it after the original time estimate has passed. This way, you'll have a better understanding of what went wrong, and you won't be left with the "what if" factor. And, finally, a huge contributing factor to failure is having the wrong attitude. It's okay to be skeptical about something; you can pursue an idea with positive skepticism. It's when you're convinced something is going to fail, or that an idea is terrible, without fully vetting it that you run into a problem. If you're concerned an attitude will derail a concept, then move to another idea until you've gathered enough support to try it out.

If you can take one thing away from all of this, it's that you're going to fail. So fail proudly, fail confidently, and fail often. And when you do fail, take time to understand why.


added on 03-Feb-2018


What is a Socket?

Sockets allow communication between two different processes on the same or different machines. To be more precise, a socket is a way to talk to other computers using standard Unix file descriptors. In Unix, every I/O action is done by writing or reading a file descriptor. A file descriptor is just an integer associated with an open file, and it can be a network connection, a text file, a terminal, or something else. To a programmer, a socket looks and behaves much like a low-level file descriptor, because calls such as read() and write() work with sockets in the same way they do with files and pipes. Sockets were first introduced in 2.1BSD and subsequently refined into their current form with 4.2BSD. The sockets feature is now available with most current UNIX system releases.

Where is a Socket Used?

A Unix socket is used in a client-server application framework. A server is a process that performs some functions on request from a client. Most application-level protocols, like FTP, SMTP, and POP3, make use of sockets to establish a connection between client and server and then to exchange data.

Socket Types

There are four types of sockets available to users. The first two are most commonly used and the last two are rarely used. Processes are presumed to communicate only between sockets of the same type, but there is no restriction that prevents communication between sockets of different types.

Stream Sockets − Delivery in a networked environment is guaranteed. If you send three items "A, B, C" through a stream socket, they will arrive in the same order − "A, B, C". These sockets use TCP (Transmission Control Protocol) for data transmission. If delivery is impossible, the sender receives an error indicator. Data records do not have any boundaries.

Datagram Sockets − Delivery in a networked environment is not guaranteed. They're connectionless: you don't need an open connection as with stream sockets − you build a packet with the destination information and send it out. They use UDP (User Datagram Protocol).

Raw Sockets − These provide users access to the underlying communication protocols that support socket abstractions. These sockets are normally datagram oriented, though their exact characteristics depend on the interface provided by the protocol. Raw sockets are not intended for the general user; they have been provided mainly for those interested in developing new communication protocols, or for gaining access to some of the more cryptic facilities of an existing protocol.

Sequenced Packet Sockets − These are similar to stream sockets, except that record boundaries are preserved. This interface is provided only as part of the Network Systems (NS) socket abstraction and is very important in most serious NS applications. Sequenced-packet sockets allow the user to manipulate the Sequence Packet Protocol (SPP) or Internet Datagram Protocol (IDP) headers on a packet or a group of packets, either by writing a prototype header along with whatever data is to be sent, or by specifying a default header to be used with all outgoing data, and they allow the user to receive the headers on incoming packets.

added on 08-Feb-2018
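To make the stream-socket behavior above concrete, here's a minimal Node.js sketch using the built-in net module (TCP, i.e., stream sockets). The port number is an arbitrary assumption:

    const net = require('net');

    // Stream (TCP) socket server: echoes back whatever a client sends.
    const server = net.createServer((socket) => {
      socket.on('data', (chunk) => socket.write(chunk)); // reliable, ordered delivery
    });

    server.listen(4000, () => {
      // Stream (TCP) socket client.
      const client = net.connect(4000, 'localhost', () => {
        client.write('A, B, C');
      });
      client.on('data', (data) => {
        console.log('echoed back:', data.toString()); // arrives in order: A, B, C
        client.end();
        server.close();
      });
    });

For datagram sockets, Node exposes the dgram module (UDP) instead, where each send is an independent packet with no delivery guarantee.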
What is CORS? How does it work?

Cross-origin resource sharing (CORS) is a mechanism that allows many resources (e.g., fonts, JavaScript, etc.) on a web page to be requested from another domain outside the domain from which the resource originated. It's a mechanism supported in HTML5 that manages XMLHttpRequest access to a different domain. CORS adds new HTTP headers that provide access to permitted origin domains.

For HTTP methods other than GET (or POST with certain MIME types), the specification mandates that browsers first send an HTTP OPTIONS request (a "preflight") to solicit a list of supported (and available) methods from the server. The actual request can then be submitted. Servers can also notify clients whether "credentials" (including cookies and HTTP authentication data) should be sent with requests.

added on 08-Feb-2018
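As a sketch of the server side, here's a minimal Node.js HTTP server that grants cross-origin access to a single permitted origin and answers the OPTIONS preflight. The origin and port are illustrative assumptions:

    const http = require('http');

    const ALLOWED_ORIGIN = 'https://app.example.com'; // hypothetical trusted origin

    http.createServer((req, res) => {
      // CORS headers announce which origin may read the response.
      res.setHeader('Access-Control-Allow-Origin', ALLOWED_ORIGIN);
      res.setHeader('Access-Control-Allow-Credentials', 'true'); // allow cookies

      if (req.method === 'OPTIONS') {
        // Preflight: list the methods and headers the browser may use.
        res.setHeader('Access-Control-Allow-Methods', 'GET, POST, PUT, DELETE');
        res.setHeader('Access-Control-Allow-Headers', 'Content-Type');
        res.writeHead(204);
        return res.end();
      }

      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ ok: true }));
    }).listen(3000);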
Explain the purpose of each of the HTTP request types when used with a RESTful web service.

GET: Retrieves data from the server (should only retrieve data and should have no other effect).

POST: Sends data to the server for a new entity. It is often used when uploading a file or submitting a completed web form.

PUT: Similar to POST, but used to replace an existing entity.

PATCH: Similar to PUT, but used to update only certain fields within an existing entity.

DELETE: Removes data from the server.

TRACE: Provides a means to test what a machine along the network path receives when a request is made. As such, it simply returns what was sent.

OPTIONS: Allows a client to request information about the request methods supported by a service. The relevant response header is Allow, and it simply lists the supported methods. (It can also be used to request information about the request methods supported for the server where the service resides by using a * wildcard in the URI.)

HEAD: Same as the GET method for a resource, but returns only the response headers (i.e., with no entity-body).

CONNECT: Primarily used to establish a network connection to a resource (usually via some proxy that can be requested to forward an HTTP request as TCP and maintain the connection). Once established, the response sends a 200 status code and a "Connection Established" message.

added on 08-Feb-2018
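As an illustration of the most common verbs from the client side, here's a sketch using fetch against a hypothetical /articles resource (the URL and payloads are invented for the example):

    async function demo() {
      const base = 'https://api.example.com/articles'; // hypothetical endpoint

      await fetch(base); // GET: read the collection

      await fetch(base, { // POST: create a new entity
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ title: 'Hello' }),
      });

      await fetch(`${base}/42`, { // PUT: replace entity 42 wholesale
        method: 'PUT',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ title: 'Hello', draft: false }),
      });

      await fetch(`${base}/42`, { // PATCH: update only some fields
        method: 'PATCH',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ draft: true }),
      });

      await fetch(`${base}/42`, { method: 'DELETE' }); // DELETE: remove entity 42
      await fetch(`${base}/42`, { method: 'HEAD' });   // HEAD: headers only, no body
    }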
Explain the basic structure of a MIME multipart message when used to transfer different content type parts. Provide a simple example.

A simple example of a MIME multipart message is as follows:

    MIME-Version: 1.0
    Content-Type: multipart/mixed; boundary=frontier

    This is a message with multiple parts in MIME format.
    --frontier
    Content-Type: text/plain

    This is the body of the message.
    --frontier
    Content-Type: application/octet-stream
    Content-Transfer-Encoding: base64

    PGh0bWw+CiAgPGhlYWQ+CiAgPC9oZWFkPgogIDxib2R5PgogICAgPHA+VGhpcyBpcyB0aGUg
    Ym9keSBvZiB0aGUgbWVzc2FnZS48L3A+CiAgPC9ib2R5Pgo8L2h0bWw+Cg==
    --frontier--

Each MIME message starts with a message header. This header contains information about the message content and boundary. In this case, Content-Type: multipart/mixed; boundary=frontier means that the message contains multiple parts, where each part is of a different content type and they are separated by --frontier as their boundary. Each part consists of its own content header (zero or more Content- header fields) and a body. Multipart content can be nested. The content-transfer-encoding of a multipart type must always be 7bit, 8bit, or binary to avoid the complications that would be posed by multiple levels of decoding. The multipart block as a whole does not have a charset; non-ASCII characters in the part headers are handled by the Encoded-Word system, and the part bodies can have charsets specified if appropriate for their content-type.

MIME is an acronym for Multi-purpose Internet Mail Extensions. It is used as a standard way of classifying file types over the Internet. Web servers and browsers have a defined list of MIME types, which facilitates transfer of files of a known type, irrespective of operating system or browser. A MIME type has two parts: a type and a subtype, separated by a slash (/). For example, the MIME type for Microsoft Word files is application/msword (i.e., the type is application and the subtype is msword).

added on 08-Feb-2018
Long Polling

What is long polling, how does it work, and why would you use it? Considering server and client resources, what is the main drawback of using long polling? Which HTML5 feature is the best alternative to long polling?

The HTTP protocol is based on a request/response pattern, which means that the server cannot push any data to the client (i.e., the server can only provide data to the client in response to a client request). Long polling is a web application development pattern used to emulate pushing data from server to client. When the long polling pattern is used, the client submits a request to the server, and the connection then remains active until the server is ready to send data to the client. The connection is closed only after data is sent back to the client or a connection timeout occurs. The client then creates a new request when the connection is closed, thus restarting the loop.

There are two important drawbacks that need to be considered when using long polling:

Long polling requests are no different from any other HTTP request, and web servers handle them the same way. This means that every long poll connection will reserve server resources, potentially maxing out the number of connections the server can handle. This can lead to HTTP connection timeouts.

Each web browser limits the maximum number of connections a web application can make, so your application's load time and performance may be degraded.

In HTML5, a useful alternative to long polling is a WebSocket: a protocol providing full-duplex communication channels over a single TCP connection. The WebSocket protocol makes more interaction between a browser and a web site possible, facilitating live content and eliminating the need for the long polling paradigm. Another potential answer is Server-Sent Events, a method of continuously sending data from a server to the browser rather than repeatedly requesting it. However, this HTML5 feature is not supported by Microsoft Internet Explorer, making it a less attractive solution.

added on 08-Feb-2018
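Here's a minimal sketch of the client half of the long polling loop in JavaScript; /updates is a hypothetical endpoint that holds the request open until it has data or times out:

    // Each iteration issues a request that the server parks until it has news.
    async function pollForUpdates() {
      while (true) {
        try {
          const res = await fetch('/updates'); // stays pending until the server responds
          if (res.ok) {
            console.log('update:', await res.json());
          }
        } catch (err) {
          // Timeout or network error: back off briefly before reconnecting.
          await new Promise((resolve) => setTimeout(resolve, 1000));
        }
        // Falling through restarts the loop with a fresh request.
      }
    }

    pollForUpdates();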
HTTP request pool from the browser

Consider the following JavaScript code that is executed in a browser:

    function startAjaxQueue() {
      for (var i = 0; i < 50; i++) {
        executeAjaxCallAsync();
      }
    }

Assuming that executeAjaxCallAsync() uses a standard XmlHttpRequest asynchronously to retrieve data from the server, how many concurrent HTTP requests would you expect to be created by this loop?

The number of concurrent HTTP requests and XmlHttpRequests is limited in all browsers. The specific limits differ depending on browser type and version. For example, according to the Mozilla Developer Network, Firefox 3 limits the number of XMLHttpRequest connections per server to 6 (previous versions limit this to 2 per server). With this in mind, the number of concurrent HTTP requests created in this loop would never (by default) be larger than 6, and the browser would therefore execute this loop in chunks.

added on 08-Feb-2018
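If you want that chunking behavior explicitly rather than relying on the browser's queue, you can cap concurrency yourself. A sketch (the /items endpoint is hypothetical):

    // Issue at most `limit` requests at a time, mirroring the browser's per-server cap.
    async function runInChunks(total, limit) {
      for (let start = 0; start < total; start += limit) {
        const batch = [];
        for (let i = start; i < Math.min(start + limit, total); i++) {
          batch.push(fetch(`/items/${i}`)); // hypothetical endpoint
        }
        await Promise.all(batch); // wait for the whole chunk before issuing more
      }
    }

    runInChunks(50, 6);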
What is an ETag and how does it work?

An ETag is an opaque identifier assigned by a web server to a specific version of a resource found at a URL. If the resource content at that URL ever changes, a new and different ETag is assigned. In typical usage, when a URL is retrieved the web server will return the resource along with its corresponding ETag value, which is placed in an HTTP ETag header field:

    ETag: "unique_id_of_resource_version"

The client may then decide to cache the resource, along with its ETag. Later, if the client wants to retrieve the same URL again, it will send its previously saved copy of the ETag along with the request in an If-None-Match field:

    If-None-Match: "unique_id_of_resource_version"

On this subsequent request, the server may now compare the client's ETag with the ETag for the current version of the resource. If the ETag values match, meaning that the resource has not changed, then the server may send back a very short response with an HTTP 304 Not Modified status. The 304 status tells the client that its cached version is still good and that it should use that. However, if the ETag values do not match, meaning the resource has likely changed, then a full response including the resource's content is returned, just as if ETag were not being used. In this case, the client may decide to replace its previously cached version with the newly returned resource and the new ETag.

added on 08-Feb-2018
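Here's a minimal Node.js sketch of the server side of this exchange; hashing the body to derive the ETag is one common approach, not the only one:

    const http = require('http');
    const crypto = require('crypto');

    const body = JSON.stringify({ message: 'hello' });
    // Derive the ETag from the content, so it changes whenever the body does.
    const etag = '"' + crypto.createHash('sha1').update(body).digest('hex') + '"';

    http.createServer((req, res) => {
      if (req.headers['if-none-match'] === etag) {
        res.writeHead(304); // client's cached copy is still good; send no body
        return res.end();
      }
      res.writeHead(200, { 'ETag': etag, 'Content-Type': 'application/json' });
      res.end(body);
    }).listen(3000);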
Stateless and Stateful protocols

Explain the difference between stateless and stateful protocols. Which type of protocol is HTTP?

A stateless communications protocol treats each request as an independent transaction. It therefore does not require the server to retain any session, identity, or status information spanning multiple requests from the same source. Similarly, the requester cannot rely on any such information being retained by the responder. In contrast, a stateful communications protocol is one in which the responder maintains "state" information (session data, identity, status, etc.) across multiple requests from the same source.

HTTP is a stateless protocol: it does not require the server to retain information or status about each user across multiple requests. Some web applications implement state using different methods (cookies, custom headers, hidden form fields, etc.). However, at the very core of every web application, everything relies on HTTP, which is still a stateless protocol based on a simple request/response paradigm.

added on 08-Feb-2018
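As a sketch of layering state on top of stateless HTTP, here's a minimal Node.js server that tracks visits per session with a cookie. The in-memory store and cookie name are illustrative assumptions (a real application would use signed cookies and persistent storage):

    const http = require('http');

    const sessions = new Map(); // session id -> per-user state

    http.createServer((req, res) => {
      const match = /session=(\w+)/.exec(req.headers.cookie || '');
      let id = match && match[1];
      if (!id || !sessions.has(id)) {
        id = Math.random().toString(36).slice(2); // toy id, not secure
        sessions.set(id, { visits: 0 });
        res.setHeader('Set-Cookie', `session=${id}`);
      }
      const session = sessions.get(id);
      session.visits += 1; // state HTTP itself never carries between requests
      res.end(`visits in this session: ${session.visits}`);
    }).listen(3000);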
Key advantages of HTTP/2 compared with HTTP/1.1

HTTP/2 provides decreased latency to improve page load speed by supporting:

Data compression of HTTP headers

Server push technologies

Loading of page elements in parallel over a single TCP connection

Prioritization of requests

An important operational benefit of HTTP/2 is that it avoids the head-of-line blocking problem of HTTP/1.1.

added on 08-Feb-2018
What's the difference between GET and POST?

Both are methods used in HTTP requests. Generally it is said that GET is for downloading data and POST is for uploading data, but we can do both downloading and uploading with either GET or POST.

GET: If we are sending parameters in a GET request to the server, those parameters will be visible in the URL, because in GET, parameters are appended to the URL. So there's a lack of security when sending data to the server. We can only send a limited amount of data in a GET request, because the URL has a maximum length and we cannot append a long data string to it.

POST: If we are using POST, then we are sending parameters in the body section of the request. If we send data in the body of an HTTP request over an encrypted connection, it's quite a bit more secure. We can also send a lot more data using POST.

Note: GET is faster in the case of just getting data using a static API call where we don't have to pass any parameters.

added on 08-Feb-2018
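The difference is easy to see from the client side. A sketch with fetch (the endpoint is hypothetical):

    async function compare() {
      // GET: parameters ride in the URL's query string, visible in logs
      // and subject to URL length limits.
      await fetch('https://api.example.com/search?q=shoes&page=2');

      // POST: parameters travel in the request body, which can be much
      // larger and doesn't appear in the URL (use HTTPS for actual privacy).
      await fetch('https://api.example.com/search', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ q: 'shoes', page: 2 }),
      });
    }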
Big-O Cheat Sheet

Link: bigocheatsheet.com

added on 10-Feb-2018
Difference between dynamic programming and the greedy approach? (Based on Wikipedia's articles.)

Greedy approach

A greedy algorithm is an algorithm that follows the problem-solving heuristic of making the locally optimal choice at each stage with the hope of finding a global optimum. In many problems, a greedy strategy does not in general produce an optimal solution, but nonetheless a greedy heuristic may yield locally optimal solutions that approximate a global optimal solution in a reasonable time. We can make whatever choice seems best at the moment and then solve the subproblems that arise later. The choice made by a greedy algorithm may depend on choices made so far, but not on future choices or all the solutions to the subproblem. It iteratively makes one greedy choice after another, reducing each given problem into a smaller one.

Dynamic programming

The idea behind dynamic programming is quite simple. In general, to solve a given problem, we need to solve different parts of the problem (subproblems), then combine the solutions of the subproblems to reach an overall solution. Often, when using a more naive method, many of the subproblems are generated and solved many times. The dynamic programming approach seeks to solve each subproblem only once, thus reducing the number of computations: once the solution to a given subproblem has been computed, it is stored, or "memoized"; the next time the same solution is needed, it is simply looked up. This approach is especially useful when the number of repeating subproblems grows exponentially as a function of the size of the input.

Difference

A greedy algorithm never reconsiders its choices. This is the main difference from dynamic programming, which is exhaustive and is guaranteed to find the solution. After every stage, dynamic programming makes decisions based on all the decisions made in the previous stage, and may reconsider the previous stage's algorithmic path to the solution.

For example, let's say that you have to get from point A to point B as fast as possible, in a given city, during rush hour. A dynamic programming algorithm will look into the entire traffic report, looking into all possible combinations of roads you might take, and will only then tell you which way is the fastest. Of course, you might have to wait for a while until the algorithm finishes, and only then can you start driving. The path you take will be the fastest one (assuming that nothing changed in the external environment). On the other hand, a greedy algorithm will start you driving immediately and will pick the road that looks the fastest at every intersection. As you can imagine, this strategy might not lead to the fastest arrival time, since you might take some "easy" streets and then find yourself hopelessly stuck in a traffic jam.

Some other details: in mathematical optimization, greedy algorithms solve combinatorial problems having the properties of matroids. Dynamic programming is applicable to problems exhibiting the properties of overlapping subproblems and optimal substructure.

added on 18-Feb-2018
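The classic way to see the difference is a coin-change instance where the greedy choice misleads. A JavaScript sketch with the illustrative coin set [1, 3, 4]:

    // Greedy: always take the largest coin that fits.
    function greedyCoins(coins, amount) {
      const sorted = [...coins].sort((a, b) => b - a);
      let count = 0;
      for (const c of sorted) {
        while (amount >= c) { amount -= c; count++; }
      }
      return count; // works here because the set includes a 1-coin
    }

    // Dynamic programming: solve every sub-amount once and reuse it.
    function dpCoins(coins, amount) {
      const best = new Array(amount + 1).fill(Infinity);
      best[0] = 0;
      for (let a = 1; a <= amount; a++) {
        for (const c of coins) {
          if (c <= a) best[a] = Math.min(best[a], best[a - c] + 1);
        }
      }
      return best[amount];
    }

    console.log(greedyCoins([1, 3, 4], 6)); // 3 coins (4 + 1 + 1)
    console.log(dpCoins([1, 3, 4], 6));     // 2 coins (3 + 3)

Greedy never reconsiders taking the 4, while the DP table compares all the sub-solutions and finds 3 + 3.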
Bit manipulation

To do: discover more problems about this topic.

added on 18-Feb-2018