Pages

Saturday, December 14, 2013

Scientific Computing: Big Data Analytics


What is Big Data?
Big Data refers to collections of data sets so large and complex that traditional tools struggle with them. It is not just about collecting the data; it is also about storing and warehousing it. Warehousing the data includes functions such as capture, storage, search, sharing, transfer, analysis and visualization.

Traditional database system:
Over the years we have come across various types of database systems: navigational databases, relational databases, SQL databases and NoSQL databases. All of these systems share a common problem, which is limited storage. In this digital world we generate huge amounts of digital data in our day-to-day lives, knowingly or unknowingly. The limitation in storage has therefore become a critical problem.

Why Big Data Analytics?


Big Data solves this problem by accommodating all the data we generate. Big Data systems support many data types; both structured and unstructured data can be processed and handled.

  • Discovery: Because it stores many types of data, it is difficult to get answers by simply querying it. We therefore need automated mechanisms to search the data for us.
  • Iteration: With a huge data set, it is hard to know where to begin exploring, so an iterative approach is used.
  • Mining and Predicting: Mining data and predicting results has become serious business, and many start-ups are built around this idea. For example, The Climate Corporation is a San Francisco-based start-up that provides insurance to farmers based on huge records of climate and weather data.
  • Decision Management: Drawing conclusions from a very large data set with a traditional database is practically impossible, so Big Data plays a key role in decision-making tasks.
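The iterative, distributed style of analysis described above is often introduced with a MapReduce-style word count: a map phase emits small key/value facts, and a reduce phase aggregates them. Here is a minimal single-machine sketch in Python; the function names and sample documents are illustrative, not from any particular framework:

```python
from collections import defaultdict

def map_phase(documents):
    # Emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.lower().split():
            yield word, 1

def reduce_phase(pairs):
    # Aggregate: sum the counts emitted for each word.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["big data is big", "data analytics"]
print(reduce_phase(map_phase(docs)))
# {'big': 2, 'data': 2, 'is': 1, 'analytics': 1}
```

In a real Big Data system the map and reduce phases run in parallel across many machines, which is what lets the same idea scale to data sets no single database could hold.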

Thursday, December 5, 2013

Computer Graphics: Autodesk's 3ds Max & Maya


3DS MAX:

3ds Max, formerly known as 3D Studio Max, is one of the most widely used 3D architectural modeling packages. It is used by many game developers, animation movie makers and commercial advertisement makers.


Features:
It provides the tools needed to model virtually any object, and it offers many plug-ins that can be added depending on the animator's needs. Some of the important features are MAXScript, Character Studio, skinning and texture assignment. MAXScript helps in creating scripts that automate repetitive animation tasks. Character Studio helps to animate virtual characters. Skinning binds a model's surface to a precise skeletal structure. Texture assignment is used to design and apply textures to the animated body.

Modeling techniques:
  • Polygon modeling: Makes it possible to build models out of polygons. Each polygon is created individually and all its characteristics are added to it.
  • NURBS: Non-uniform rational B-splines eliminate the rough edges of a polygon model, making its surface smooth.
  • Surface tool: Used to apply any type of surface to a polygon model once it is created.

MAYA:

Autodesk’s Maya is one of the leading packages for creating character models and animation. It is widely used in animated movies today.


Components:
There are lots of components in Maya which provide different functionalities. Some of the important components are classic cloth, nHair, fur and fluid effects.

Classic Cloth provides the cloth patterns we use in the real world. nHair is used to create long human hair. Fur is used to create short hair on a body, typically animal fur. Fluid Effects can be used to create non-elastic fluids such as smoke, fire and explosions.

Both packages are proprietary software. There are many tutorials on YouTube to learn them. Here is a link to learn 3ds Max and Maya.

Friday, November 29, 2013

Communications and Security: Email security


Email has become an inevitable mode of communication in our day-to-day lives. Even though many new technologies such as chat, messengers and video calls are on the market, email still has its value and is preferred for both formal and informal communication. Email has its own weaknesses, however, and is prone to threats. Let us look at some techniques for securing communication via email.


Create a Unique Password: Using the same password for all your email accounts is not advisable. If one account is compromised, every other account sharing that password is at risk. At the very least, use a unique password for the master account that holds your most sensitive details.
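One way to follow this advice is to generate a distinct random password for every account instead of reusing one. A small sketch using Python's standard secrets module, which is designed for security-sensitive randomness; the account names here are made up for illustration:

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length=16):
    # secrets.choice draws from a cryptographically strong source,
    # unlike random.choice, which is predictable.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One distinct password per account -- never reuse across accounts.
accounts = ["personal-mail", "work-mail", "master-account"]
passwords = {name: generate_password() for name in accounts}
```

In practice a password manager does this bookkeeping for you, but the principle is the same: each account gets its own unpredictable secret.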

Encrypt your data: Data sent through email should be encrypted, and double encryption can be applied to particularly sensitive data. This helps prevent the loss of data.



Use digital certificates: Sign all your sites with digital certificates. A good practice is to store your digital certificates on routers or load balancers rather than on the web servers themselves.

Spam filter on email servers: Use a spam filter to avoid spam. SpamAssassin is an Apache project that prevents unwanted email from reaching your inbox.

Scan for Viruses & Malware: If something in your mail looks suspicious, run a malware and virus scanner. A virus will not affect your inbox every time, but it is still safer to run a virus scan regularly. 

Saturday, November 23, 2013

Artificial Intelligence: Google Brain


Google Brain is a deep learning research project at Google: an artificial intelligence system intended to interact with users and answer questions. Andrew Ng, director of the Stanford Artificial Intelligence Lab, founded the project at Google, where it developed very large-scale artificial neural networks using Google's distributed computing infrastructure.


Google Brain is a fascinating project that uses a set of machine learning algorithms together with artificial neural networks. Researchers train it on various sets of training data so that it can enable high-quality speech recognition, email spam blocking and more. Even Google's self-driving car concept is part of the Google Brain project.

To teach a machine to distinguish between cars and motorcycles, researchers normally need to collect tens of thousands of pictures already labeled as 'car' or 'motorcycle' to train it. People working on this project instead created a neural network that simulates a newborn brain and exposed it to YouTube videos for a week. They found that it began to learn from the unlabeled YouTube stills: this small-scale simulation of Google Brain learned the concept of 'cat' from unlabeled stills on its own. This is known as self-taught learning.
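The labeled-training approach described above can be sketched with the simplest possible learner, a perceptron: every example carries a label (say 1 for 'car', 0 for 'motorcycle'), and the weights are nudged whenever a prediction is wrong. The two-feature toy data below is invented purely for illustration and stands in for real image features:

```python
def train_perceptron(data, epochs=20, lr=0.1):
    # data: list of ((x1, x2), label) pairs with label 0 or 1.
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in data:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = label - pred        # 0 if correct, +1 or -1 if wrong
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

# Toy labeled set: class 1 points sit above the line x2 = x1.
data = [((0.0, 1.0), 1), ((1.0, 2.0), 1), ((1.0, 0.0), 0), ((2.0, 1.0), 0)]
w1, w2, b = train_perceptron(data)
predict = lambda x1, x2: 1 if w1 * x1 + w2 * x2 + b > 0 else 0
```

Self-taught learning removes exactly the expensive part of this recipe, the labels: the network must discover categories such as 'cat' from the raw data itself.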

They are working on implementing this on all possible areas like speech recognition and email spam blocking. 

Friday, November 15, 2013

History of Computer Science: Who invented computer?



There is no easy answer to the question of who invented the computer. The computer was not invented by one person, but by many; not in a single breakthrough, but in a series of incremental steps. That process continues even today with each new generation.


First automatic computing engine concept
Charles Babbage is known as the "father of the computer". He designed the Analytical Engine, which contained an arithmetic logic unit (ALU) and integrated memory, making it the first general-purpose computing concept.



First concepts modern computer
Alan Turing proposed the concept of the Turing machine in 1936. It is considered the first model of the modern computer: a machine with a tape of symbols that can compute the logic of any computer algorithm.

First digital computer
ENIAC was the first electronic general-purpose computer. It had individual panels that performed different functions. ENIAC was designed by J. Presper Eckert and John Mauchly at the University of Pennsylvania, and construction began in 1943.



First commercial computer
In 1942, Konrad Zuse began working on the Z4, which later became the first commercial computer after being sold to Eduard Stiefel, a mathematician at the Swiss Federal Institute of Technology Zurich, on July 12, 1950.


The first PC (IBM compatible) computer
The first IBM personal computer was introduced in 1981. Referred to internally as the Acorn, it had an Intel 8088 processor and 16 KB of memory (expandable to 256 KB), and it ran MS-DOS.

Thursday, November 14, 2013

File Sharing: Peer-to-peer


Peer-to-peer file sharing is different from traditional file downloading. In peer-to-peer sharing, you use a software program (rather than your Web browser) to locate computers that have the file you want. Because these are ordinary computers like yours, as opposed to servers, they are called peers.


The process works like this:
  • You run peer-to-peer file-sharing software on your computer and send out a request for the file you want to download.
  • To locate the file, the software queries other computers that are connected to the Internet and running the file-sharing software.
  • When the software finds a computer that has the file you want on its hard drive, the download begins.
  • Others using the file-sharing software can obtain files they want from your computer's hard drive.
The file-transfer load is distributed among the computers exchanging files, but file searches and transfers from your computer to others can cause bottlenecks. Some people download files and immediately disconnect without allowing others to obtain files from their system; this is called leeching, and it limits the number of computers the software can search for a requested file.
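The search-and-download steps above can be simulated in a few lines. In this sketch there is no real networking: each peer is just an object holding the files it shares, and a query fans out to every connected peer. The peer names and file names are invented for illustration:

```python
class Peer:
    def __init__(self, name, files):
        self.name = name
        self.files = dict(files)  # file name -> contents

    def query(self, filename):
        # Step 2-3: does this peer have the requested file?
        return filename in self.files

    def download(self, filename):
        # Step 4: serve the file from this peer's "hard drive".
        return self.files[filename]

def search(peers, filename):
    # Steps 1-2: ask every connected peer, collect those that can serve it.
    return [p for p in peers if p.query(filename)]

peers = [Peer("alice", {"song.mp3": b"..."}),
         Peer("bob", {"movie.avi": b"..."}),
         Peer("carol", {"song.mp3": b"..."})]

sources = search(peers, "song.mp3")
names = [p.name for p in sources]   # ["alice", "carol"]
```

Notice that the more peers stay connected and sharing, the more sources a search finds; a leeching peer simply drops out of the peers list, shrinking the result.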
Advantages:
  • Fault tolerance
  • Performance
  • Cost efficiency
Disadvantages:
  • Release of personal information, bundled spyware, and viruses downloaded from the network.
Some well-known peer-to-peer file-sharing programs are BitTorrent, Shareaza, LimeWire and eMule. Visit this site for more information regarding peer-to-peer file-sharing programs.

Data Structures: Trees


A data structure is an arrangement of data in a computer’s memory (or sometimes on a disk). Data structures include arrays, linked lists, stacks, binary trees, and hash tables, among others.

A tree is an acyclic connected graph in which each node has zero or more child nodes and at most one parent node. 

Equivalently, a tree is a non-empty set in which one element is designated the root while the remaining elements are partitioned into subtrees of the root.

Properties of Tree data structure

Depth: It is the length of the path from the root to that node, counted as the number of edges traversed.

Height: It is the longest path from that node to its leaves. The height of a tree is the height of the root.

Leaf node: It has no children. Its only path is up to its parent.

Types of trees

Binary: Each node has zero, one, or two children. This assertion makes many tree operations simple and efficient.

Binary Search: A binary tree where any left child node has a value less than its parent node and any right child node has a value greater than or equal to that of its parent node.
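The ordering rule just described (left child less than the parent, right child greater than or equal) is what makes searching efficient: each comparison discards an entire subtree. A minimal binary search tree sketch in Python, with hypothetical helper names:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def insert(root, value):
    # Smaller values go left; greater-or-equal values go right.
    if root is None:
        return Node(value)
    if value < root.value:
        root.left = insert(root.left, value)
    else:
        root.right = insert(root.right, value)
    return root

def contains(root, value):
    # Each comparison discards one subtree, so search takes O(height) steps.
    while root is not None:
        if value == root.value:
            return True
        root = root.left if value < root.value else root.right
    return False

root = None
for v in [8, 3, 10, 1, 6]:
    root = insert(root, v)
```

After these inserts, `contains(root, 6)` follows the path 8 → 3 → 6, touching only three of the five nodes.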

Traversal

Three different methods of traversal are possible for binary trees. They are 'preorder', 'postorder', and 'in-order'. They differ from each other by the order in which they visit the current node, left subtree and right subtree. 

Preorder: Current node, left subtree, right subtree 
Postorder: Left subtree, right subtree, current node 
In-order: Left subtree, current node, right subtree
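The three orders above differ only in where the current node's value is emitted relative to the recursive calls. A compact sketch in Python, representing a binary tree as nested tuples (an illustrative choice, not a standard library type):

```python
# A binary tree as nested tuples: (left_subtree, value, right_subtree),
# with None standing for an empty subtree.

def preorder(t):
    if t is None:
        return []
    left, value, right = t
    return [value] + preorder(left) + preorder(right)

def inorder(t):
    if t is None:
        return []
    left, value, right = t
    return inorder(left) + [value] + inorder(right)

def postorder(t):
    if t is None:
        return []
    left, value, right = t
    return postorder(left) + postorder(right) + [value]

#        2
#       / \
#      1   3
tree = ((None, 1, None), 2, (None, 3, None))
```

For this tree, preorder yields [2, 1, 3], postorder yields [1, 3, 2], and in-order yields [1, 2, 3]; note that an in-order walk of a binary search tree always visits the values in sorted order.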