Sunday, 29 July 2012

Client-Server Models and N-Tier Applications


One of the principal objectives of the Client-Server approach is to deliver data to an end user, but Client-Server architectural methodologies go well beyond that. Client-Server describes the process wherein a client program initiates contact with a separate server program, over a networked system, for a particular purpose. The client, in these cases, is the requester of a service that the server is expected to provide.
In the course of the past two decades, we have witnessed the evolution of large scale, complex information systems. During this period, Client-Server computing models have come to be accepted as the preferred means of architecture for the design and deployment of applications.
Client-Server Models serve as the foundation of current enabling technologies such as workflow and groupware systems.
It is certain that Client-Server technologies will continue to have a major effect on future technological transformations. They have already driven one recent transformation, in which network computing split monolithic, mainframe-based applications into two components – Client and Server.
In the past, Client-Server systems have been associated with a desktop PC connected over a network to an SQL database server of some sort. In actuality, however, the term “Client-Server” refers to a logical model that divides tasks into two layers, marked either “Client” or “Server”.
Within the Information Technology sector, a very simple form of Client-Server computing has been practiced since the inception of the mainframe: a Single-Tier (One-Tier) system, which consists of a mainframe host connected directly to a terminal.
In Two-Tier Client-Server architecture, however, the client is in direct communication with the database. The application logic, or business logic, thus resides either on the client or on the database server in the form of stored procedures.
The Client-Server Models initially emerged alongside the applications that were being developed for local area networks in the latter half of the ‘80s and the first half of the ‘90s. These models were mostly based on elementary file sharing techniques implemented by Xbase-style products such as Paradox, FoxPro, Clipper, and dBase.

Fat Clients and Fat Servers

At first, the Two-Tier model ran on non-mainframe hardware and relied on an intelligent “fat” client, which is where most of the processing took place. This configuration was not very scalable, so larger systems could not be accommodated; with fifty or more connected clients it would not function properly.
The GUI (Graphical User Interface) then came into being as the most common desktop environment. Alongside Graphical User Interface technology, a new form of Two-Tier architecture emerged: the general-purpose LAN file server was replaced by a new, specialized database server, and new development tools appeared, including Visual Basic, Delphi, and PowerBuilder.
While a lot of the major processing still took place on the fat clients, datasets could now be delivered to the client by issuing Structured Query Language (SQL) requests to a database server, which would then merely return the results of the queries.
The more complex the application becomes, the fatter the client gets, and the client hardware must become increasingly powerful to support it. As a result, the cost of adequate client technology can become prohibitive and may in fact defeat the affordability of the application.
What is more, the network footprint of fat clients is incredibly large (think Bigfoot here), so there is an inevitable reduction in the network’s bandwidth as well as in the number of users who can use the network effectively.
Another approach often taken in Two-Tier architecture is the thin client <-> fat server configuration. In this configuration, the user invokes procedures stored at the database server. The fat server model performs more effectively, as its network footprint, while still heavy, is a lot lighter than that of the fat client method.
The negative side of this approach is that stored procedures lead to proprietary coding and customization, because they rely on a single vendor’s implementation. What is more, as stored procedures tend to be buried deep in the database, every database containing a procedure has to be modified whenever the business logic changes. This can lead to major management issues, particularly with large distributed databases.
In either case, a remote database transport protocol (Oracle’s SQL*Net, for example) is used to carry out the transaction. Such models require heavy network processing to mediate between Client and Server, and both query speed and effective network throughput suffer because of the weight of these transactions.
Regardless of which technique is employed, such Two-Tier Client-Server systems were still unable to scale beyond about a hundred users, and this form of architecture tends not to be well suited to mission-critical applications.

Three-Tier Client-Server Architecture

In more recent times, a middle tier was added to Client-Server implementations, effectively creating a three-tier structure. In a three-tier or N-Tier environment, the client implements the presentation logic, the business logic is implemented on application servers, and the data resides on database servers.
The following three component layers define a Multi Tier (or N-Tier) architecture.
First of all, there is the front end component. This component provides portable presentation logic.
Secondly, there is a middle tier, which allows business logic to be shared and controlled by isolating it from the rest of the application.
Then there is the back end component. This component provides users with access to services like database servers.
Multi Tier architecture augments the Two-Tier structure by introducing middle tier components. The client system works with the middle tier through standard protocols such as RPC and HTTP, while the middle tier interacts with the backend server through standard database interfaces and protocols such as JDBC, ODBC, and SQL.
The vast majority of the application logic is contained in the middle tier. It is here that client calls are translated into database queries, and data from the database is translated back into client data.
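As a rough illustration of that translation step, here is a minimal middle-tier sketch in Java using plain JDBC. The table name, column, connection URL, and credentials are invented for the example; a real application server would normally obtain connections from a managed data source.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class OrderService {

    // Hypothetical connection settings; in a real middle tier these would
    // come from configuration or a container-managed data source.
    private static final String DB_URL = "jdbc:postgresql://dbhost:5432/shop";

    // Translate a client call ("what is the total of order N?") into a SQL
    // query, and translate the result set back into plain data for the client.
    public double getOrderTotal(int orderId) throws SQLException {
        String sql = "SELECT total FROM orders WHERE id = ?";
        try (Connection con = DriverManager.getConnection(DB_URL, "appuser", "secret");
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setInt(1, orderId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getDouble("total") : 0.0;
            }
        }
    }
}

A presentation-tier client would call getOrderTotal() over HTTP or RPC without ever seeing the SQL or the database connection details.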
Positioning the business logic on the application server maximizes both scalability and the isolation of that logic, which makes it easier to handle a business’s rapidly evolving requirements. It also allows a more open choice of database vendors.
Three-tier architecture can extend to N tiers when the middle tier provides connections to a variety of different services, integrating them and coupling them to the client as well as to each other.

N-Tier Architecture

As Client-Server Models evolved throughout the decade, many Multi Tier architecture models began to appear, enabling computers on the client side to function as both clients and servers. Once software developers realized that smaller processes were simpler to design, not to mention cheaper and faster to implement, N-Tier models increased in popularity quite rapidly. The same principles applied to the client side were then applied to the server side, and as a result thinner, more specialized server processes evolved.
These days, N-Tier architecture seems to dominate the industry. The vast majority of new IS development is being created in the form of N-Tier systems.
It should be noted, however, that N-Tier architecture does not necessarily preclude the use of Two-Tier or Three-Tier models. Depending on the scale and requirements of a particular application, Two-Tier or Three-Tier models are still very often used in departmental applications.
N-Tier computing is widely considered to be the most effective model these days, as it promotes integration of contemporary information technology in the form of a much more flexible model. It is widely believed that the percentage of applications utilizing an N-Tier model is going to grow four fold within the next two years.
Three-Tier and N-Tier systems can do two main things that Two-Tier systems cannot: partition the application processing load among several different servers, and funnel database connections. By centralizing application logic in the middle tier, business logic can be readily updated by a developer without having to re-deploy an application to thousands of different desktops.
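To make the idea of funneling database connections concrete, here is a hand-rolled sketch in Java of a tiny connection pool held by the middle tier. The class name and pool size are illustrative, and a production system would normally use an established pooling library rather than this minimal version.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Many client requests are funneled through a handful of database
// connections owned by the middle tier.
public class ConnectionFunnel {

    private final BlockingQueue<Connection> pool;

    public ConnectionFunnel(String url, String user, String pass, int size) throws SQLException {
        pool = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            pool.add(DriverManager.getConnection(url, user, pass));
        }
    }

    // A request borrows a connection, waiting if all of them are busy...
    public Connection borrow() throws InterruptedException {
        return pool.take();
    }

    // ...and hands it back when the work is done, instead of opening its own.
    public void giveBack(Connection con) {
        pool.offer(con);
    }
}

Hundreds of client requests can then share the same handful of connections instead of each opening its own session against the database server.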

Distributed Processing

N-Tier computing attains a high level of synergy by combining different computing models and providing centralized common services in a single distributed environment.
The multi-level distribution architecture in question must rely on a back-end host of some kind, an intelligent client, and several intelligent agents in order to control activities like online transaction processing, message handling, and transaction monitoring.
Such forms of architecture tend to rely heavily on object oriented methodologies, which help to effect a maximum amount of interchangeability and flexibility. 
TP monitors, distributed objects, and application partitioning tools can all help spread the processing load among many different machines, supporting a practically unlimited number of users and processing loads – quite a far cry from the Two-Tier architectural models of the past. Indeed, N-Tier is here to stay, at least for the foreseeable future.

Three Tier Software Architectures


In this tutorial, you will learn about Three Tier software architectures: their purpose, history, technical details, usage considerations, maturity, costs, and alternatives.
The concept of Three Tier and multi tier architectures is sometimes credited to Rational Software. Three Tier software is defined as a client-server architecture in which the user interface, functional process logic, data access, and data storage are developed and maintained as independent modules, usually located on different platforms. This architectural model is considered both a software design pattern and a software architecture.
Besides the advantages that typically come with modular software with well-defined interfaces, Three Tier systems are designed to allow any of their tiers to be upgraded or replaced without interfering with the other tiers or requiring a major change in technology. For instance, if one were to change the operating system from Microsoft Windows to UNIX, only the user interface code would be affected.
The user interface generally runs on a desktop computer, usually a PC, or a work station. It utilizes a normal graphical user interface. Its functional process logic might consist of one or several different modules running on a single application server or workstation. An RDBMS on a database server will contain the data storage logic. The center tier may itself be multi tiered – if this is the case, then the architecture is referred to as n-tier architecture.
Three Tier Architecture contains the following tiers or levels:
1. Presentation
2. Application/Logic/Business Logic/Transaction
3. Data
In the field of Web development, Three Tier is often used to describe web sites; electronic commerce web sites, in particular, tend to be built this way. Such a web site is usually built from the following three tiers:
  1. A front end web server, which serves static content
  2. The middle level is typically an Application server. It might use, for example, a Java EE platform.
  3. A back end Database, which will contain both the Database management system and the Data sets. This manages the data and provides access to it.
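As a sketch of what the middle level of such a site might look like, here is a minimal servlet for a Java EE application server. The class name and the HTML it emits are invented for illustration, and the static front end and the back-end database are assumed to live on their own tiers.

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Middle tier of the three-tier web site: it receives requests forwarded by
// the front-end web server, applies the business logic, and in a real system
// would read the product catalogue from the back-end database tier before
// rendering the page.
public class CatalogServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setContentType("text/html");
        PrintWriter out = resp.getWriter();
        out.println("<html><body>");
        out.println("<h1>Product catalogue</h1>");
        out.println("<p>Product data would be fetched from the database tier here.</p>");
        out.println("</body></html>");
    }
}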

Three Tier Software Architectures Purpose and History

Three Tier Architecture emerged in the 1990s as a means of overcoming the limitations of two tier architecture. The third tier was added as a middle tier between the user interface and the data management server. This middle tier provides process management, and it is where business rules and logic are typically executed. Several hundred users can be accommodated under a Three Tier model, whereas a two tier model could only accommodate about a hundred. Three Tier Architecture accomplishes this through functions such as queuing, application execution, and database staging.
Three Tier Architecture is typically employed when a distributed client server design is necessary that will provide an increase in performance, scalability, flexibility, reusability, and maintainability. At the same time, the complexity of the distributed processing is concealed from the end user. As a result of these optimizations, Three Tier Architectures have been found to be convenient models for Internet applications, as well as information systems that rely on the World Wide Web in some way.

Three Tier Architecture Technical Details

The diagram below shows a model of Three Tier client-server architecture. The top tier is the user system interface, where user services such as session management, text input, dialog, and display management are located. The three tiers shown are:
  • User System Interface
  • Process Management
  • Database Management

The third tier contains the database management functions. Its purpose is to optimize data and file services without having to resort to proprietary database management system languages. This component makes sure that the data is consistent throughout the environment, using features such as data locking, replication, and consistency checks. The connectivity between tiers can be changed dynamically, depending on the user’s request for services and data.
The middle tier in the model above provides process management services that are shared by multiple applications, such as process enactment, process resourcing, process development, and process monitoring. This tier, also called the application server, improves performance, scalability, reusability, flexibility, and maintainability by centralizing process logic. Centralization makes change management and administration much simpler, because changes to the system’s functionality only have to be written once, placed on the middle tier, and made available throughout the system. With other architectural designs, the same change would have to be written into each and every application.
The central process management tier also controls transactions and asynchronous queuing, ensuring that transactions complete reliably, and it manages distributed database integrity through a two-phase commit process. It provides access to resources based on names rather than locations, which improves flexibility and scalability as components of the system are moved or added.

It sometimes happens that the middle tier is divided into several different units, each serving a different function; when this occurs, the architecture is referred to as multi layer. Many Internet applications operate in this fashion: they consist of light clients written in HTML and application servers written in a language such as Java or C++. The gap between these two layers is too large to link them together directly, so an intermediate layer, typically a web server running a scripting language, is introduced. Requests from Internet clients are received by this layer, which generates HTML using the services provided by the business layer. The additional layer provides further isolation between the application logic and its presentation.

Three Tier Architecture usage considerations

Three Tier Architectures tend to be employed in military and commercial distributed client-server environments that require shared resources, such as processing rules and heterogeneous databases. Hundreds of users can be supported, which makes Three Tier Architecture a lot more scalable than two tier architecture.
Three Tier Architecture also helps software development, since each tier can be built and executed on a different platform, which makes the implementation easier to organize. It also allows the different tiers to be developed in different languages.
It is possible to migrate legacy systems to Three Tier Architecture in a low-risk and cost-effective fashion. This is accomplished by maintaining the old database and process management rules, allowing the new and old systems to run side by side until every application and data object has been moved to the new design. Such a migration may well require rebuilding legacy applications with new tools and buying additional service tools and server platforms. The benefit, however, is that Three Tier Architectures can hide the complexity of deploying and supporting network communications and the underlying services.

Maturity, Costs and Alternatives

Throughout the early half of the ‘90s, Three Tier Architecture systems were used successfully on thousands of systems, by the Department of Defense as well as in the business sector, wherever distributed information computing was needed in a heterogeneous environment.
Building a Three Tier Architecture model can be quite a lot of work. The fact is, we are still not at the point where the programming tools that support the design and deployment of such architectures provide all of the services required to support a distributed computing environment.
One problem in the design of Three Tier Architecture systems is that it is not always clear where the process management logic, data logic, and interface logic should be separated. Occasionally, process management logic appears on all of the tiers. It is therefore necessary to base the placement of a particular function on a tier on criteria such as ease of development and testing, ease of administration, scalability of the servers, and performance (including both network load and processing load).
Sometimes Three Tier software architecture is not necessary; there are instances in which a two tier client-server system will suffice, typically when the number of users is under a hundred, or for non-real-time information processing in a simple system that requires little operator intervention.
Another viable alternative to Three Tier software architecture is distributed / collaborative enterprise computing. This alternative is appropriate when object oriented technology on an enterprise-wide scale is the end goal; such enterprise-wide designs typically consist of several smaller systems or subsystems.
While Three Tier Architecture is definitely sound, the products supporting its implementation are not always as well developed as technologies in competing fields. When existing multi-layer technology cannot fulfill one’s needs, Transaction Monitors have been recommended. While Transaction Monitors do not support modern development paradigms such as Object Orientation, they can still be used when massive scalability and robustness are the end goal.
Three Tier technology also has many complementary technologies, such as Object Oriented Design, which is used to implement decomposable applications. Others include database Two Phase Commit processing and Three Tier client-server architecture tools. Useful middleware includes Message Oriented Middleware and Remote Procedure Call.

What is N-Tier?


N-Tier applications are useful, in that they are able to readily implement Distributed Application Design and architecture concepts. These types of applications also provide strategic benefits to solutions at the enterprise level. It is true that two tier, client server applications may seem deceptively simple from the outset – they are easy to implement and easy to use for Rapid Prototyping. At the same time, these applications can be quite a pain to maintain and secure over time. 
N-Tier applications, on the other hand, are advantageous, particularly in the business environment, for a number of reasons.
N-Tier applications typically come loaded with the following components:
  • Security. N-Tier applications provide logging and monitoring mechanisms, as well as appropriate authentication, to keep the system secure.
  • Availability and Scalability. N-Tier applications tend to be more reliable; they come with fail-over mechanisms, such as fail-over clusters, to provide redundancy.
  • Manageability. N-Tier applications are designed with deployment, monitoring, and troubleshooting in mind, ensuring that one has sufficient tools at one’s disposal to handle, log, and correct any errors that may occur.
  • Maintenance. Maintenance of N-Tier applications is easier, as they adopt coding and deployment standards, data abstraction, modular application design, and frameworks that enable reliable maintenance strategies.
  • Data abstraction. N-Tier applications make it possible to adjust the functionality of one layer without altering the others.
Now that we understand the benefits of using an N-Tier application, let us explore the ways that one might go about building such an application.
First off, in order to build a successful N-Tier application, one must have thorough knowledge of the business in question and of the domain involved, as well as sufficient technical and design expertise. To distribute an application’s functionality successfully, the appropriate “tiers” must be identified.
As useful as they are, there are also situations when an N-Tier application might not be the most ideal solution for one’s business needs. Most of all, one should keep in mind that building an N-Tier application involves a lot of time, experience, skill, commitment, and maturity – not to mention the high cost. If one is insufficiently prepared in any of these areas, then building an N-Tier application might not be appropriate for you at this moment. Above all, building a successful N-Tier application necessitates a favorable cost benefit ratio from the outset.
First off, you should fully understand what an N-Tier application is, what it does, and how it functions. To put it in the simplest terms possible, N-Tier applications distribute a system’s overall functionality into a number of layers or “tiers.”
In a typical implementation, for example, you will most likely have at least some of the following layers, if not all: Presentation, Business Rules, Data Access, and Database. In some instances, one or more of these layers may be split into several sub layers. Each layer can be developed separately from the others, as long as it communicates with the other layers and adheres to the standards set out in the specifications.
With an N-Tier application, it is possible for each layer to treat the other layers in a “black box” fashion. That means that the layers do not care how the other layers process information, as long as the data is sent between layers in the correct format.
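A small sketch of that black-box idea in Java (the interface, class, and method names are invented for illustration): the business layer programs against an interface, so any data-access implementation that honours the contract can be swapped in without the business layer knowing or caring how it works internally.

import java.util.Arrays;
import java.util.List;

// The contract between the business layer and the data-access layer.
interface CustomerRepository {
    List<String> findCustomerNames();
}

// One possible implementation; it could just as well read from a database,
// a file, or a remote service - the business layer neither knows nor cares.
class InMemoryCustomerRepository implements CustomerRepository {
    public List<String> findCustomerNames() {
        return Arrays.asList("Alice", "Bob");
    }
}

// The business layer treats the repository as a black box.
class ReportService {
    private final CustomerRepository repository;

    ReportService(CustomerRepository repository) {
        this.repository = repository;
    }

    String buildReport() {
        return "Customers: " + String.join(", ", repository.findCustomerNames());
    }
}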

What Does N-Tier Do?

If you or someone you know works in the computer business, chances are great that you have heard the term “N-Tier” in recent weeks. Indeed, N-Tier is everywhere these days, and for good reason – it gives businesses who rely on computers a lot more freedom and capabilities than they had before.
To put it in simple terms, the “N” in “N-Tier” means there is no fixed limit on the number of tiers – you can have any number you want. N-Tier systems are useful in that they allow a business to use whatever combination of software and hardware resources it wants or needs, to use that combination to its advantage, and to add on whatever components may be needed, right on the spot.
N-Tier applications provide you with the capability to mix and match whatever forms of computer software and hardware layers you need in order for your business to function to its maximum capabilities. N-Tier applications aid businesses in providing a modular collection of Information Services – an aspect that all businesses in today’s competitive market need to stay afloat.
Any number of component based clients, interfaces, middleware, agents, and data servers may be arranged flexibly into a multitude of different configurations – almost like software Lego!
When you partition your programs in to tiers, every component or layer can be independently developed, deployed, enhanced, and managed – all without outside interference.
These days, everyone in every sector is moving to an N-Tier network computing model. This means that N-Tier is not merely a passing trend – it is becoming the new business reality. The move to N-Tier computing systems has had a major impact on web based computer applications, as well as enterprise based applications.
N-Tier computing systems are also referred to as browser based systems, network centric systems, thin client computing, or multi tiered computing; in fact, N-Tier computing encapsulates all of these concepts and more. What an N-Tier computing system provides is the ability to harness and work with the many complexities of modern computing. Utilizing an N-Tier framework brings a simplified and unified mix of otherwise chaotic applications, cross-platform networks, and interfaces. In fact, there are so many benefits to using an N-Tier architecture that one article alone could not cover them all!

N-Tier and Distributed Computing

As we have seen above, the term N-Tier can mean a lot of things to a lot of different people. Depending on your role in the business, N-Tier will apply to you in one or several different ways. Let us take a look at how N-Tier applies to the arena of distributed computing, first of all.
N-Tier applications are applications that one can readily divide in to a number of different logical layers or tiers, all through the use of a reusable, component based method. Such logical layers are able to operate in a number of different configurations and be used through several different physical systems. Thus, N-Tier models provide an unlimited amount of scalability and flexibility that should suit any business’s requirements.
Those seeking to integrate a vast collection of computational resources into a single unified system should look no further than N-Tier. This model distributes computing effectively and can be built to accommodate a variety of different computing languages, platforms, and operating systems.
N-Tier systems also provide the user with a flexible framework for a distributed computer environment. This ensures that users are able to take advantage of their resources and infrastructure while also being able to rest assured that they are fully prepared for whatever changes may arise in the future.
Most businesses in the past have relied on a client server computing model. What is so great about N-Tier is that it is based upon this familiar model, so as not to be too confusing. This model relies on Internet and Intranet related computer technology, which enables users to maximize their returns on investments as well as existing skill sets. At the same time, a reliable framework that is fully adaptable to growth and change is provided.
Finally, N-Tier computing systems provide users with a convenient method for centralizing their control over business information that is becoming increasingly critical in our technological era. At the same time, these structures allow for innovation within the departments, while also allowing for increased consumer input and a maximization of supplier input.

N-Tier as 21st Century Technology

A useful way of viewing N-Tier computer architecture is as a computational “unified field theory,” in which everything can potentially be related to everything else. By using an N-Tier model, an existing client-server system can be substantially improved and better maintained over time. It provides a multi-layered environment that effectively simplifies the distribution of code, as the vast majority of the business logic is moved from the client to the server during the N-Tier upgrade process.
Once an N-Tier model of computing is employed, one can distribute independent components and / or services over as many tiers as desired and link them to one another dynamically. The end result is that application flexibility becomes practically unlimited.
Remember – The letter N in N-Tier stands for any number of levels or tiers or layers, opportunities, clients or customers, advantages, objects, components, benefits, servers, services, abilities, configurations, and transactions. With N-Tier computing, the sky really is the limit!   

N-Tier as “The Way of the Future”

If you are looking for ways to get the most out of your business systems, then N-Tier computing is a smart solution. It provides a wide array of advantages, allowing for easier maintenance, upgrading, and heightened security. N-Tier applications can often be acquired in packages that enable them to work with UDB, SQL Server, and Oracle, among other products. Big enterprise applications are usually designed as N-Tier applications, and many of them are also web based, which is both secure and easy to use.

Application Development

Call by Value and Call by Reference


In the C programming language, variables can be passed to functions in different ways depending on the context. For example, if you are writing a program for a low-memory system, you may want to avoid copying larger types such as structs and arrays when passing them to functions. On the other hand, with data types like integers, there is usually no point in passing by reference, since a pointer to an integer is typically the same size in memory as the integer itself.
Now, let us learn how variables can be passed in a C program.

Pass By Value

Passing a variable by value makes a copy of the variable before passing it to a function. This means that if you modify the value inside the function, only that function sees the modified value. Once the function returns, the variable you passed in will have the same value it had before you passed it into the function.

Pass By Reference

There are two instances where a variable is passed by reference:
  1. When you want to modify the value of the passed variable locally and also have the change reflected in the calling function.
  2. To avoid making a copy of the variable for efficiency reasons.
Let's have a quick example that will illustrate both concepts.
xytotals.c:
Sample Code
#include <stdio.h>
#include <stdlib.h>

void printtotal(int total);
void addxy(int x, int y, int total);   /* total passed by value */
void subxy(int x, int y, int *total);  /* total passed by reference */

int main(void) {

  int x, y, total;
  x = 10;
  y = 5;
  total = 0;

  printtotal(total);
  addxy(x, y, total);    /* works on a copy of total */

  printtotal(total);

  subxy(x, y, &total);   /* passes the address of total */
  printtotal(total);

  return 0;
}

void printtotal(int total) {
  printf("Total in Main: %d\n", total);
}

void addxy(int x, int y, int total) {
  total = x + y;         /* changes only the local copy */
  printf("Total from inside addxy: %d\n", total);
}

void subxy(int x, int y, int *total) {
  *total = x - y;        /* changes the caller's variable through the pointer */
  printf("Total from inside subxy: %d\n", *total);
}



Here is the output:
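Total in Main: 0
Total from inside addxy: 15
Total in Main: 0
Total from inside subxy: 5
Total in Main: 5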
There are three functions in the above program. In the first two functions, the variable is passed by value, but in the third function the variable `total` is passed by reference, as indicated by the `*` in its parameter declaration.
The program prints the value of the variable `total` from the main function before any operations have been performed. It prints 0, as expected.
Then we pass the variables by value to the second function, addxy, which receives copies of `x`, `y`, and `total`. The value of `total` inside that function, after adding x and y together, is 15.
Notice that once addxy has exited and we print `total` again, its value is still 0, even though we passed it in from the main function. This is because when a variable is passed by value, the function works on a copy of that value. There were two different `total` variables in memory at the same time: the one we set to 15 in the addxy function was removed from memory once addxy finished executing, while the original `total` in main remains 0 when we print its value.
Now we subtract y from x using the subxy function. This time we pass variable `total` by reference (using the “address of” operator (&)) to the subxy function.
Note how we use the dereference operator “*” to get the value of total inside the function. This is necessary, because you want the value in variable `total`, not the address of variable `total` that was passed in.
Now we print the value of the variable `total` from main again after the subxy function finishes. We get a value of 5, which matches the value of total printed from inside the subxy function. The reason is that by passing the variable `total` by reference we did not make a copy of it; instead we passed the address in memory of the same `total` variable used in main. In other words, we only had one `total` variable during the entire execution of the program.

Monday, 23 July 2012

How to Use China Mobile as a Modem to Surf Internet on PC


You need the mobile phone and a data cable.
Make sure SIM1 holds the SIM that is to be used for the connection, and make sure your phone has network coverage.
1. First, download the Micromax driver (it works for all China mobiles) from the Micromax website.
2. Download the driver (say, the Q3 driver) and the phone suite (say, the Q3 Phone Suite), extract them, and save them on your hard disk.
3. Go to Add Modems on your PC and browse for the (.inf) file located in path\Q3_Phone_Suite\Q3 Phone Suite\modem_inf, then press OK.
4. Double-click the application files in the Driver folder (don't forget to install InstallDriver.exe).
5. Open the Phone Suite application and configure your COM port using your USB data cable.
6. Create a connection (Phone Suite\Settings\Create Connection) using the access point for the network you use, for example:
Vodafone = portalnmms
Aircel = aircelgprs.pr
Airtel = airtelgprs.com
Reliance = rcomnet
7. Go to the Dialup tab to connect using your data cable and enjoy surfing.

8. To use it wirelessly, connect your phone to the PC using Bluetooth, right-click on your device, click Dial-up Networking, and create a connection using the modem "Standard Modem over Bluetooth Link #(label)" as well as the modem installed a few minutes earlier, i.e. the "MTK GPRS Modem" with the COM label that you configured (e.g. COM3).
That's all done; enjoy browsing on your PC using your GFIVE handset, with or without wires.

Hack Websites Database Using Xpath Injection


Every day many websites get hacked, but most hackers hack those websites just for popularity, nothing else. Today I am writing this tutorial on XPath Injection, in which I will explain how hackers hack a website's database using XPath Injection.
In a typical web application architecture, all data is stored on a database server, which may hold the data in various formats such as LDAP, XML, or an RDBMS. The application queries the server and accesses the information based on user input.
Normally, attackers try to extract more information than they are allowed to see by manipulating the query with specially crafted inputs. In this tutorial we'll be discussing XPath Injection techniques to extract data from XML databases.


Before we go deeper into XPath injection, let's take a brief look at what XML and XPath are.

What is XML?

XML stands for Extensible Markup Language and was designed to describe data. It lets programmers create their own customized tags to store data on a database server. An XML document is broadly similar to an RDBMS database except for the way the data is stored: in a normal database, data is stored in table rows and columns, whereas in XML the data is stored in nodes of a tree.

What is XPath?

XPath is a query language used to select data from XML data sources. It is increasingly common for web applications to use XML data files on the back-end, using XPath to perform queries much the same way SQL would be used against a relational database.
XPath injection, much like SQL injection, exists when a malicious user can insert arbitrary XPath code into form fields and URL query parameters in order to inject this code directly into the XPath query evaluation engine. Doing so would allow a malicious user to bypass authentication (if an XML-based authentication system is used) or to access restricted data from the XML data source.
Let's learn with the help of an example that shows how XPath works. Assume that our database is represented by the following XML file:

<?xml version="1.0" encoding="ISO-8859-1"?> 
<users> 
<user> 
<username>wildhacker</username> 
<password>123</password> 
<account>admin</account> 
</user> 
<user> 
<username>cutler</username> 
<password>jay</password> 
<account>guest</account> 
</user> 
<user> 
<username>ronie</username> 
<password>coleman</password> 
<account>guest</account> 
</user> 
</users>

The above file shows how each user's username, password, and account details are stored in the XML file.
The following XPath query returns the account whose username is "wildhacker" and whose password is "123":
string(//user[username/text()='wildhacker' and password/text()='123']/account/text())

If the application developer does not properly filter user input, the tester or hacker will easily be able to inject XPath code and interfere with the query result. For instance, the hacker or tester could input the following values:
Username: ' or '1' = '1 
Password: ' or '1' = '1

Using these above parameters, the query becomes:
string(//user[username/text()='' or '1' = '1' and password/text()='' or '1' = '1']/account/text())
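For illustration, here is a minimal sketch in Java, using the standard javax.xml.xpath API, of the kind of login code that builds such a query by concatenating user input. The file name, class name, and variable names are assumptions, but the string concatenation is exactly what makes the injection possible.

import java.io.FileReader;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.xml.sax.InputSource;

public class XPathLogin {
    public static void main(String[] args) throws Exception {
        String user = args[0];   // e.g.  ' or '1' = '1
        String pass = args[1];   // e.g.  ' or '1' = '1

        // Vulnerable: user input is concatenated straight into the query.
        String query = "string(//user[username/text()='" + user +
                       "' and password/text()='" + pass + "']/account/text())";

        XPath xpath = XPathFactory.newInstance().newXPath();
        String account = xpath.evaluate(query, new InputSource(new FileReader("users.xml")));
        System.out.println("Account: " + account);
    }
}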

As in a common SQL Injection attack, we have created a query that always evaluates to true, which means that the application will authenticate the user even if a username or a password has not been provided.
And as in a common SQL Injection attack, with XPath injection, the first step is to insert a single quote (‘) in the field to be tested, introducing a syntax error in the query, and to check whether the application returns an error message.
If we have no knowledge of the XML data's internal details, and the application does not provide useful error messages that help us reconstruct its internal logic, it is possible to perform a Blind XPath Injection attack (which I will explain in a future tutorial), whose goal is to reconstruct the whole data structure. The technique is similar to inference-based SQL Injection, as the approach is to inject code that creates a query returning one bit of information at a time.
This information is for educational purposes only. If you use it to harm any person or community and get caught, we are not responsible. Hack to learn, not learn to hack.