Friday, January 10, 2014

Gmail changes settings, now lets you mail people on Google+


You can now mail someone whose email address you don't have. It is as simple as that. It also means you will get mail from people not in your address book.
Google on Friday announced a change in its policies that lets you email people you know but whose email address you don't have. "Starting this week, when you're composing a new email, Gmail will suggest your Google+ connections as recipients, even if you haven't exchanged email addresses yet," said a post from Google.
However, a mail sent like this will land in your Social category and not the main inbox.
"Emailing Google+ connections works a bit differently to protect the privacy of email addresses," says the post, explaining how privacy will be maintained. "Your email address isn't visible to your Google+ connections until you send them an email, and their email addresses are not visible to you until they respond," it adds.
The post specifies that "only after you respond or add them to your circles, can they start another conversation with you".
However, you can control who can reach you with a new setting in Gmail on the desktop. It lets you decide who gets to email you -- everyone on Google+, extended circles, circles or no one.



Wednesday, July 24, 2013

DIFFERENCE BETWEEN INTERNET AND INTRANET

Consider these aspects and how they differ between an Internet presence and a corporate intranet.




Business Goals
  Internet: Communication, marketing support, and selling products.
  Intranet: Broad goals, including but not limited to communicating information accurately while improving staff efficiency.

Audience
  Internet: External users with a limited understanding of the organization.
  Intranet: Internal employees with a good understanding of the organization.

Efficiency
  Internet: Pages must display in a reasonable time frame; however, messages and graphics may be introduced deliberately to make a point even though they delay the pages being presented.
  Intranet: The primary goal of the site is to improve staff efficiency; unnecessary content detracts from productivity.

Size and Content
  Internet: Small to medium, with minimal changes to the content, possibly weekly or monthly on an actively changing site.
  Intranet: Medium to massive, with content changing hourly.

Content
  Internet: Narrow, centered around key products and services.
  Intranet: Broad and varied, focused on the tools employees need to perform their jobs.

Presentation
  Internet: Appearance or pizzazz is very important.
  Intranet: Consistency is more important than appearance.

Authors
  Internet: Often centralized.
  Intranet: Often decentralized.

Friday, April 19, 2013

CLOUD COMPUTING



Cloud computing is the use of computing resources (hardware and software) that are delivered as a service over a network (typically the Internet). The name comes from the common use of a cloud-shaped symbol as an abstraction for the complex infrastructure it contains in system diagrams. Cloud computing entrusts remote services with a user's data, software and computation.
End users access cloud-based applications through a web browser or a light-weight desktop or mobile app while the business software and user's data are stored on servers at a remote location. Proponents claim that cloud computing allows companies to avoid upfront infrastructure costs, and focus on projects that differentiate their businesses instead of infrastructure.[1] Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and enables IT to more rapidly adjust resources to meet fluctuating and unpredictable business demand.[1][2][3]
In the business model using software as a service (SaaS), users are provided access to application software and databases. Cloud providers manage the infrastructure and platforms that run the applications. SaaS is sometimes referred to as "on-demand software" and is usually priced on a pay-per-use basis. SaaS providers generally price applications using a subscription fee.
Proponents claim that SaaS gives a business the potential to reduce IT operational costs by outsourcing hardware and software maintenance and support to the cloud provider. This enables the business to reallocate IT operations costs away from hardware/software spending and personnel expenses, towards meeting other IT goals. In addition, with applications hosted centrally, updates can be released without the need for users to install new software. One drawback of SaaS is that the users' data are stored on the cloud provider's server. As a result, there could be unauthorized access to the data.
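To make the access model concrete, here is a minimal sketch of a client consuming a SaaS application over the network. It assumes a hypothetical provider: the example-saas.invalid URL, the /invoices endpoint, and the bearer token are placeholders, not a real API. The point is simply that the application and data live on the provider's servers and the client only needs a network call.

```python
# Minimal sketch of calling a hypothetical SaaS REST API; URL, endpoint,
# and token are illustrative placeholders, not any real provider's API.
import json
import urllib.request

API_BASE = "https://example-saas.invalid/api/v1"     # hypothetical provider
API_TOKEN = "replace-with-your-subscription-token"   # subscription / pay-per-use credential

def list_invoices():
    req = urllib.request.Request(
        f"{API_BASE}/invoices",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:         # data lives on the provider's servers
        return json.loads(resp.read().decode("utf-8"))

if __name__ == "__main__":
    print(list_invoices())   # no local install or server maintenance required
```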



Saturday, April 6, 2013

CAPTCHA










CAPTCHA (pronounced /ˈkæptʃə/) is a type of challenge-response test used in computing as an attempt to ensure that the response is generated by a human being. The process usually involves a computer asking a user to complete a simple test which the computer is able to grade. These tests are designed to be easy for a computer to generate and grade but difficult for a computer to solve, while remaining easy for a human. If a correct solution is received, it can be presumed to have been entered by a human. A common type of CAPTCHA requires the user to type letters and/or digits from a distorted image that appears on the screen. Such tests are commonly used to prevent unwanted internet bots from accessing websites, since a normal human can easily read a CAPTCHA, while a bot cannot process the distorted letters and therefore cannot answer properly, or at all.
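To make the distorted-text idea concrete, the following is a minimal sketch of such a generator, assuming the Pillow imaging library is installed; the jittered placement, noise lines, and blur are illustrative stand-ins for the distortions real schemes apply.

```python
# A minimal sketch of a distorted-text CAPTCHA generator (assumes Pillow).
import random
import string
from PIL import Image, ImageDraw, ImageFilter

def make_captcha(length=5, size=(160, 60)):
    text = "".join(random.choices(string.ascii_uppercase + string.digits, k=length))
    img = Image.new("RGB", size, "white")
    draw = ImageDraw.Draw(img)
    # Scatter the characters at jittered positions to hinder segmentation.
    for i, ch in enumerate(text):
        x = 10 + i * 28 + random.randint(-3, 3)
        y = 15 + random.randint(-8, 8)
        draw.text((x, y), ch, fill="black")
    # Add noise lines, then blur slightly so OCR has a harder time.
    for _ in range(6):
        draw.line([(random.randint(0, size[0]), random.randint(0, size[1])),
                   (random.randint(0, size[0]), random.randint(0, size[1]))],
                  fill="grey")
    img = img.filter(ImageFilter.GaussianBlur(1))
    return text, img   # store `text` server-side, send `img` to the user

if __name__ == "__main__":
    answer, image = make_captcha()
    image.save("captcha.png")
    print("expected answer:", answer)
```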
Although most CAPTCHAs are randomly generated letter pictures, many of them have become difficult even for a human to read, so picture CAPTCHAs were created: the user is shown a few animal pictures and asked to pick the one showing a certain animal. This is simple for a human being to process, while a bot cannot solve the question because, although it can analyze the picture, it cannot easily guess the animal.
The term "CAPTCHA" was coined in 2000 by Luis von AhnManuel BlumNicholas J. Hopper, and John Langford (all ofCarnegie Mellon University). It is an acronym based on the word "capture" and standing for "Completely Automated PublicTuring test to tell Computers and Humans Apart". Carnegie Mellon University attempted to trademark the term,[1] but the trademark application was abandoned on 21 April 2008.[2]
A CAPTCHA is sometimes described as a reverse Turing test, because it is administered by a machine and targeted at a human, in contrast to the standard Turing test that is typically administered by a human and targeted at a machine.

Applications

CAPTCHAs are used in attempts to prevent automated software from performing actions which degrade the quality of service of a given system, whether due to abuse or resource expenditure. CAPTCHAs can be deployed to protect systems vulnerable to e-mail spam, such as the webmail services of Gmail, Hotmail, and Yahoo! Mail.
Most interactive sites today are backed by databases and quickly become clogged and sluggish when a database table grows beyond what the server can handle.[3] A website's Google PageRank can also be reduced by excessive commercial links created by automated posting.
CAPTCHAs are also used to minimize automated posting to blogs, forums, and wikis, whether as a result of commercial promotion, or harassment and vandalism. CAPTCHAs also serve an important function in rate limiting. Automated usage of a service might be desirable until such usage is done to excess and to the detriment of human users. In such cases, administrators can use CAPTCHA to enforce automated usage policies based on given thresholds. The article rating systems used by many news web sites are another example of an online facility vulnerable to manipulation by automated software.[4]
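As an illustration of that threshold idea, here is a minimal sketch of a check that only demands a CAPTCHA once a client exceeds a request budget within a time window; the window length, budget, and client-id scheme are assumptions for the example, not any particular site's policy.

```python
# Minimal sketch: require a CAPTCHA only after a client exceeds a request
# budget in a sliding window. Limits and identifiers are illustrative.
import time
from collections import defaultdict

WINDOW_SECONDS = 60
FREE_REQUESTS = 20
_history = defaultdict(list)   # client id -> recent request timestamps

def needs_captcha(client_id):
    now = time.time()
    recent = [t for t in _history[client_id] if now - t < WINDOW_SECONDS]
    _history[client_id] = recent + [now]
    return len(recent) >= FREE_REQUESTS   # over budget: challenge the client
```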

Accessibility

Because CAPTCHAs rely on visual perception, users unable to view a CAPTCHA due to a disability, such as people who are blind or severely visually impaired, will be unable to perform the task protected by a CAPTCHA.
Sites implementing CAPTCHAs may provide an audio version of the CAPTCHA in addition to the visual method. The official CAPTCHA site recommends providing an audio CAPTCHA for accessibility reasons, but it is still not usable for deafblind people or for users of some text-based web browsers.
Due to the sound distortion present in audio CAPTCHAs and the visual distortion present in visual CAPTCHAs, offering one as an alternative to the other does not help people with impairments in both areas. While deafblind people are a small group, having some degree of impairment in both areas is common, and very common amongst older people.

Attempts at more accessible CAPTCHAs

Even audio and visual CAPTCHAs will require manual intervention for some users, such as those who have disabilities. There have been various attempts at creating more accessible CAPTCHAs, including the use of JavaScript, mathematical questions ("how much is 1+1") and common knowledge questions ("what color is the sky on a clear day"). However, these approaches may worsen accessibility for people with intellectual and developmental disabilities, for instance dyscalculia. Some CAPTCHAs of this kind do not meet the criteria for a successful CAPTCHA because they are not automatically generated or do not present a new problem or test for each attack.
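As a concrete illustration of the "mathematical question" style mentioned above, here is a minimal sketch, assuming the question is generated and checked server-side; as the text notes, such schemes are weak because the question space is small and easily parsed by a bot.

```python
# A minimal sketch of a math-question challenge; illustrative only, and weak
# for the reasons given in the text (tiny question space, easy to parse).
import random

def make_math_challenge():
    a, b = random.randint(1, 9), random.randint(1, 9)
    question = f"How much is {a} + {b}?"
    answer = str(a + b)          # keep the answer server-side
    return question, answer

def check(user_input, answer):
    return user_input.strip() == answer

if __name__ == "__main__":
    q, a = make_math_challenge()
    print(q)                      # e.g. "How much is 3 + 7?"
    print(check("10", a))         # True only if the sum matches
```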
One approach to text-based CAPTCHAs is to create a central "anti-bot server", used by many websites, which selects for each call one puzzle, randomly, from a very large set of many different automatically generated puzzles, of many different kinds. Such a solution can be made usable for blind and visually impaired people who otherwise find prevalent image-based CAPTCHAs to be insurmountable obstacles to completing web forms.
For a more complete solution to the CAPTCHA accessibility problem, all four types of impairment that affect web use (motor, visual, cognitive and hearing) would need to be catered for. Combining the different approaches, i.e., image, audio and puzzle, would open up access to many more people; however, there has not yet been an attempt to achieve this.

Advertising

Since 2009, CAPTCHA advertising has become much more prevalent. Publishers like AOL, Meredith Corporation,[5] and Internet Brands[6] have adopted the option as an additional revenue stream. Users typically type in brand messages instead of distorted text.[7]

Circumvention

There are several approaches available to defeating CAPTCHAs:
  • exploiting bugs in the implementation that allow the attacker to completely bypass the CAPTCHA
  • improving character recognition software
  • using cheap human labor to process the tests (see below)

Insecure implementation

As with any security system, design flaws in an implementation can prevent its theoretical security from being realized. Many CAPTCHA implementations, especially those which have not been designed and reviewed by experts in the fields of security, are prone to common attacks.
Some CAPTCHA protection systems can be bypassed without using OCR simply by re-using the session ID of a known test image. A correctly designed CAPTCHA does not allow multiple solution attempts at the same test, which would allow unlimited reuse of a correct solution, or a second guess after an incorrect OCR attempt.[8] Other CAPTCHA implementations pass a hash (such as an MD5 hash) of the solution to the client as a key to validate the answer; because the hash is derived directly from the solution, it can be attacked offline and can even assist an OCR-based attempt. A more secure scheme would use an HMAC (hash-based message authentication code). Another weak design exposes the answer space directly, for example showing four pictures and asking the user to pick the correct one: a spam bot that always guesses the first picture still achieves a 25% success rate. Finally, some implementations use only a small fixed pool of CAPTCHA images. Eventually, when enough image solutions have been collected by an attacker over a period of time, the test can be broken by simply looking up solutions in a table, based on a hash of the challenge image.
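As a rough illustration of the HMAC approach mentioned above (not any specific product's scheme), the sketch below binds the expected answer and an expiry time to a keyed signature; the secret key, token format, and lifetime are assumptions made for the example.

```python
# Minimal sketch of stateless CAPTCHA validation with an HMAC; the key,
# token format, and TTL are illustrative assumptions.
import hmac, hashlib, time

SECRET_KEY = b"replace-with-a-random-server-secret"   # kept server-side only

def issue_token(answer: str, ttl: int = 300) -> str:
    expires = str(int(time.time()) + ttl)
    msg = f"{answer}|{expires}".encode()
    sig = hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()
    # Only the expiry and the signature go to the client, never the answer
    # or a plain hash of it.
    return f"{expires}.{sig}"

def verify(user_answer: str, token: str) -> bool:
    expires, sig = token.split(".")
    if int(expires) < time.time():
        return False                      # token expired, force a fresh test
    msg = f"{user_answer}|{expires}".encode()
    expected = hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```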

Computer character recognition

A number of research projects have attempted (often successfully[citation needed]) to beat visual CAPTCHAs by creating programs that contain the following functionality:
  1. Pre-processing: Removal of background clutter and noise
  2. Segmentation: Splitting the image into regions which each contain a single character
  3. Classification: Identifying the character in each region
Steps 1 and 3 are easy tasks for computers.[9] The only step where humans still outperform computers is segmentation. If the background clutter consists of shapes similar to letter shapes, and the letters are connected by this clutter, segmentation becomes nearly impossible with current software. Hence, an effective CAPTCHA should focus on making segmentation difficult.
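As a rough sketch of steps 1 and 2 above, the following assumes a plain dark-on-light CAPTCHA image and the Pillow and NumPy libraries; it splits characters on empty columns, which is exactly the naive approach that connected clutter defeats.

```python
# Rough sketch of pre-processing and segmentation for a simple
# black-on-white CAPTCHA image (assumes Pillow and NumPy).
import numpy as np
from PIL import Image

def preprocess(path, threshold=128):
    # Step 1: greyscale + binarize to strip light background noise.
    img = np.array(Image.open(path).convert("L"))
    return img < threshold            # True wherever there is "ink"

def segment(binary):
    # Step 2: split on empty columns; each run of non-empty columns is
    # treated as one character region (fails when characters touch).
    cols = binary.any(axis=0)
    regions, start = [], None
    for x, has_ink in enumerate(cols):
        if has_ink and start is None:
            start = x
        elif not has_ink and start is not None:
            regions.append(binary[:, start:x])
            start = None
    if start is not None:
        regions.append(binary[:, start:])
    return regions   # Step 3 (classification) would run a trained model on each region
```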
Several research projects have broken real world CAPTCHAs, including one of Yahoo!'s early CAPTCHAs called "EZ-Gimpy",[10] the CAPTCHAs used by popular sites such as PayPal,[11] LiveJournal, phpBB, the e-banking CAPTCHAs used by many financial institutions,[12] and CAPTCHAs used by other services.[13][14][15] In January 2008, Network Security Research released their program for automated Yahoo! CAPTCHA recognition.[16] Windows Live Hotmail and Gmail, the other two major free email providers, were cracked shortly after.[17][18]
In February 2008, it was reported that spammers had achieved a success rate of 30% to 35%, using a bot to respond to CAPTCHAs for Microsoft's Live Mail service[19] and a success rate of 20% against Google's Gmail CAPTCHA.[20] A Newcastle University research team has defeated the segmentation part of Microsoft's CAPTCHA with a 90% success rate, and reported that this could lead to a complete crack with a greater than 60% rate.[21]

Human solvers

CAPTCHA is vulnerable to a relay attack that uses humans to solve the puzzles. One approach involves relaying the puzzles to a group of human operators who can solve CAPTCHAs. In this scheme, a computer fills out a form and when it reaches a CAPTCHA, it gives the CAPTCHA to the human operator to solve.
Spammers pay about $0.80 to $1.20 for each 1,000 solved CAPTCHAs to companies employing human solvers in Bangladesh, China, India, and many other developing nations.[22] Other sources cite a cost as low as $0.50 for each 1,000 solved.[23]
Another approach involves copying the CAPTCHA images and using them as CAPTCHAs for a high-traffic site owned by the attacker. With enough traffic, the attacker can get a solution to the CAPTCHA puzzle in time to relay it back to the target site.[24] In October 2007, a piece of malware appeared in the wild which enticed users to solve CAPTCHAs in order to see progressively further into a series of striptease images.[25][26] A more recent view is that this is unlikely to work due to unavailability of high-traffic sites and competition by similar sites.[27]
These methods have been used by spammers to set up thousands of accounts on free email services such as Gmail and Yahoo![28] Since Gmail and Yahoo! are unlikely to be blacklisted by anti-spam systems, spam sent through these compromised accounts is less likely to be blocked.

Legal concerns

The circumvention of CAPTCHAs may violate the anti-circumvention clause of the Digital Millennium Copyright Act (DMCA) in the United States. In 2007, Ticketmaster sued software maker RMG Technologies[29] for its product which circumvented the ticket seller's CAPTCHAs on the basis that it violated the anti-circumvention clause of the DMCA. In October 2007, an injunction was issued stating that Ticketmaster would likely succeed in making its case.[30] In June 2008, Ticketmaster filed for default judgment against RMG. The Court granted Ticketmaster the default and entered an $18.2M judgment in favor of Ticketmaster.
In 2010, encouraged by Ticketmaster, the U.S. Attorney in Newark, New Jersey won a grand jury indictment against Wiseguy Tickets, Inc. for purchasing tickets in bulk by circumventing CAPTCHA mechanisms.[31] Among its 43 findings, the grand jury found that Wiseguy Tickets, Inc. had defeated online ticket vendors' CAPTCHA security mechanisms.[32]

Interaction with images as an alternative to text typing

Some researchers promote interaction with images as a possible alternative to text-typing CAPTCHAs, given the common feeling that they are "one of the most hated pieces of user interaction on the web".[33]
Computer-based recognition algorithms require the extraction of color, texture, shape, or special point features, which cannot be extracted correctly once the images have been deliberately distorted. Humans, however, can still recognize the original concept depicted in the images despite these distortions.
Image-Identification CAPTCHA.
A recent example of an image-interaction CAPTCHA presents the website visitor with a grid of random pictures and instructs the visitor to click on specific pictures to verify that they are not a bot (such as "Click on the pictures of the airplane, the boat and the clock").
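To illustrate that grid-selection check, here is a minimal sketch assuming the server already holds a pool of labelled images; the category names, file names, and grid size are placeholders invented for the example, not taken from any real service.

```python
# Minimal sketch of an image-grid CAPTCHA check; pool and categories are
# illustrative placeholders.
import random

IMAGE_POOL = {
    "airplane": ["plane1.png", "plane2.png"],
    "boat":     ["boat1.png", "boat2.png"],
    "clock":    ["clock1.png", "clock2.png"],
    "cat":      ["cat1.png", "cat2.png", "cat3.png"],
}

def build_grid(target_categories, grid_size=9):
    cells = [(f, cat) for cat, files in IMAGE_POOL.items() for f in files]
    random.shuffle(cells)
    grid = cells[:grid_size]
    # Remember which grid positions show a target category.
    correct = {i for i, (_, cat) in enumerate(grid) if cat in target_categories}
    return [f for f, _ in grid], correct   # send file names to the client, keep `correct` server-side

def verify(selected_indices, correct):
    return set(selected_indices) == correct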
Image interaction CAPTCHAs face many potential problems which have not been fully studied. It is difficult for a small site to acquire a large dictionary of images to which an attacker does not have access, and without a means of automatically acquiring new labelled images, an image-based challenge does not usually meet the definition of a CAPTCHA. KittenAuth, by default, had only 42 images in its database.[34] Microsoft's "Asirra", which it is providing as a free web service, attempts to address this by means of Microsoft Research's partnership with Petfinder.com, which has provided it with more than three million images of cats and dogs, classified by people at thousands of US animal shelters.[35] Researchers claim to have written a program that can break the Microsoft Asirra CAPTCHA.[36] Extending the number of categories (more than just cats and dogs) and randomizing the number of correct images in a grid increases the security of the system. The IMAGINATION CAPTCHA, however, uses a sequence of randomized distortions on the original images to create the CAPTCHA images, so its original images can be made public without risk of image-retrieval or image-annotation based attacks.
Human solvers are a potential weakness for strategies such as Asirra. If the database of cat and dog photos can be downloaded, paying workers $0.01 to classify each photo as either a dog or a cat means that almost the entire database of photos can be deciphered for $30,000. Photos that are subsequently added to the Asirra database are then a relatively small data set that can be classified as they first appear. Making minor changes to images each time they appear will not prevent a computer from recognizing a repeated image, as there are robust image comparator functions (e.g., image hashes, color histograms) that are insensitive to many simple image distortions. Warping an image sufficiently to fool a computer will likely also be troublesome to a human.[37]
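As a toy illustration of such a comparator (a simple average hash, not the specific functions any attacker has used), the sketch below assumes the Pillow library; two lightly distorted copies of the same photo produce hashes that differ in only a few bits.

```python
# Toy average-hash comparator that survives small image distortions
# (assumes Pillow); real systems use stronger perceptual hashes.
from PIL import Image

def average_hash(path, hash_size=8):
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    return "".join("1" if p > avg else "0" for p in pixels)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Two images are "the same" if their hashes differ in only a few bits,
# so re-serving a slightly warped copy of a known photo is still detectable.
```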
Researchers at Google used image orientation and collaborative filtering as a CAPTCHA.[38] Generally speaking, people know which way is "up" in a picture, but computers have a difficult time with a broad range of images. Images were pre-screened to select those for which detecting "up" is hard for software (e.g., no skies, no faces, no text). Images were also collaboratively filtered by showing a "candidate" image along with known-good images for the person to rotate. If there was a large variance in answers for the candidate image, it was deemed too hard for people as well and discarded.
Many users[who?] of the phpBB forum software (which has suffered greatly from spam) have implemented an open source image recognition CAPTCHA system in the form of an addon called KittenAuth[39] which in its default form presents a question requiring the user to select a stated type of animal from an array of thumbnail images of assorted animals. The images (and the challenge questions) can be customized, for example to present questions and images which would be easily answered by the forum's target userbase. Furthermore, for a time, RapidShare free users had to get past a CAPTCHA where they had to enter letters attached only to a cat, while others were attached to dogs.[40] This was later removed because (legitimate) users had trouble entering the correct letters.

BROWSER HISTORY



The first web browser was invented in 1990 by Tim Berners-Lee. It was called WorldWideWeb (no spaces) and was later renamed Nexus.[1] In 1993, Marc Andreessen created a browser that was easy to use and install with the release of Mosaic (later Netscape),[2] "the world's first popular browser",[3] which made the World Wide Web system easy to use and more accessible to the average person. Andreessen's browser sparked the internet boom of the 1990s.[3] These are the two major milestones in the history of the Web.


1980s to early 1990s

In 1984, expanding on ideas from futurist Ted Nelson, Neil Larson's commercial DOS Maxthink outline program added angle bracket hypertext jumps (adopted by later web browsers) to and from ASCII, batch, and other Maxthink files up to 32 levels deep.[citation needed] In 1986 he released his DOS Houdini network browser program that supported 2500 topics cross-connected with 7500 links in each file along with hypertext links among unlimited numbers of external ASCII, batch, and other Houdini files.[citation needed]
In 1987, these capabilities were included in his then popular shareware DOS file browser programs HyperRez (memory resident) and PC Hypertext (which also added jumps to programs, editors, graphic files containing hot-spot jumps, and cross-linked thesaurus/glossary files). These programs introduced many to the browser concept, and 20 years later Google still lists 3,000,000 references to PC Hypertext. In 1989, he created both HyperBBS and HyperLan, which both allowed multiple users to create/edit both topics and jumps for information and knowledge annealing, a concept the columnist John C. Dvorak says pre-dated Wiki by many years.[citation needed]
From 1987 on, he also created TransText (a hypertext word processor) and many utilities for rapidly building large-scale knowledge systems ... and in 1989 helped produce for one of the big eight accounting firms[citation needed] a comprehensive knowledge system integrating all accounting laws/regulations on a CD-ROM containing 50,000 files with 200,000 hypertext jumps. Additionally, the development history of Lynx (a very early text-based web browser) notes that the project originated from the browser concepts of Neil Larson and Maxthink.[4] In 1989, he declined to join the Mosaic browser team, preferring knowledge/wisdom creation over distributing information ... a problem he says is still not solved by today's internet.
Another early browser, Silversmith, was created by John Bottoms in 1987.[5] The browser, based on SGML tags,[6] used a tag set from the Electronic Document Project of the AAP with minor modifications and was sold to a number of early adopters. At the time SGML was used exclusively for the formatting of printed documents.[7] The use of SGML for electronically displayed documents signaled a shift in electronic publishing and was met with considerable resistance. Silversmith included an integrated indexer, full-text searches, hypertext links between images, text, and sound using SGML tags, and a return stack for use with hypertext links. It included features that are still not available in today's browsers, such as the ability to restrict searches within document structures, searches on indexed documents using wildcards, and the ability to search on tag attribute values and attribute names.
Starting in 1988, Peter Scott and Earle Fogel expanded the earlier HyperRez concept in creating Hytelnet which added jumps to telnet sites ... and which by 1990 offered users instant logon and access to the online catalogs of over 5000 libraries around the world. The strength of Hytelnet was speed and simplicity in link creation/execution at the expense of a centralized world wide source for adding, indexing, and modifying telnet links.[citation needed] This problem was solved by the invention of the web server.

The NeXT Computer which Berners-Lee used.
A NeXT Computer was used by Tim Berners-Lee (who pioneered the use of hypertext for sharing information) as the world's first Web server, and also to write an early Web browser, WorldWideWeb, in 1990. Berners-Lee introduced it to colleagues at CERN in March 1991. Since then the development of Web browsers has been inseparably intertwined with the development of the Web itself.
In April 1990, a draft patent application for "PageLink", a mass-market consumer device for browsing pages via links, was proposed by Craig Cockburn at Digital Equipment Co Ltd (DEC) while he was working in its Networking and Communications division in Reading, England. This application for a keyboardless, touch-screen browser for consumers also referred to "navigating and searching text" and "bookmarks", was aimed at (quotes paraphrased) "replacing books", "storing a shopping list", providing "an updated personalised newspaper updated round the clock" and "dynamically updated maps for use in a car", and suggested such a device could have a "profound effect on the advertising industry". The patent was shelved by Digital as too futuristic and, being largely hardware based, faced obstacles to market that purely software-driven approaches did not.

Early 1990s: WWW browsers


A graph showing the market share of Unix vs Windows browsers.
In 1992, Tony Johnson released the MidasWWW browser. Based on Motif/X, MidasWWW allowed viewing of PostScript files on the Web from Unix and VMS, and even handled compressed PostScript.[8] Another early popular Web browser was ViolaWWW, which was modeled after HyperCard.
Thomas R. Bruce of the Legal Information Institute at Cornell Law School started developing Cello in 1992 and released it on 8 June 1993; it was the first web browser that worked on Windows 3.1, NT 3.5, and OS/2.
However, the explosion in popularity of the Web was triggered by NCSA Mosaic, which was a graphical browser running originally on Unix and soon ported to the Amiga and VMS platforms, and later the Apple Macintosh and Microsoft Windows platforms. Version 1.0 was released in September 1993,[9] and was dubbed the killer application of the Internet. It was the first web browser to display images inline with the document's text.[10] Prior browsers would display an icon that, when clicked, would download and open the graphic file in a helper application. This was an intentional design decision on both parts, as the graphics support in early browsers was intended for displaying charts and graphs associated with technical papers while the user scrolled to read the text, while Mosaic was trying to bring multimedia content to nontechnical users. Marc Andreessen, who was the leader of the Mosaic team at NCSA, quit to form a company that would later be known as Netscape Communications Corporation. Netscape released its flagship Navigator product in October 1994, and it took off the next year.
IBM presented its own Web Explorer with OS/2 Warp in 1994.
Released in 1995, UdiWWW was the first web browser able to handle all HTML 3 features, including the math tags. Following the release of version 1.2 in April 1996, Bernd Richter ceased development, stating "let Microsoft with the ActiveX Development Kit do the rest."[11][12][13]
Microsoft, which had thus far not marketed a browser (in fact even as late as 1995 Bill Gates dismissed personal use of the World Wide Web as a passing fad)[citation needed], finally entered the fray with its Internet Explorer product (version 1.0 was released 16 August 1995), purchased from Spyglass, Inc. This began what is known as the "browser wars" in which Microsoft and Netscape competed for the Web browser market.
The wars put the Web in the hands of millions of ordinary PC users, but showed how commercialization of the Web could stymie standards efforts. Both Microsoft and Netscape liberally incorporated proprietary extensions to HTML in their products and tried to gain an edge by product differentiation, leading the W3C to accept the Cascading Style Sheets proposed by Håkon Wium Lie over Netscape's JavaScript Style Sheets (JSSS).

Late 1990s: Microsoft vs Netscape

In 1996, Netscape's share of the browser market reached 86% (with Internet Explorer edging up to 10%); but then Microsoft began integrating its browser with its operating system and bundling deals with OEMs, and within two years the balance had reversed. Although Microsoft has since faced antitrust litigation on these charges, the browser wars effectively ended once it was clear that Netscape's declining market share trend was irreversible. Prior to the release of Mac OS X, Internet Explorer for Mac and Netscape were also the primary browsers in use on the Macintosh platform.
Unable to continue commercially funding its product's development, Netscape responded by open-sourcing its product, creating Mozilla. This helped the browser maintain its technical edge over Internet Explorer but did not slow Netscape's declining market share. Netscape was purchased by America Online in late 1998.