Limitations
Some noted disadvantages of using "pure" CSS include:
Inconsistent browser support
Different browsers will render a CSS layout differently as a result of browser bugs or lack of support for CSS features. For example, older versions of Microsoft Internet Explorer (IE 6.0 through the IE 8.0 beta) implemented many CSS 2.0 properties in their own, incompatible way and misinterpreted a significant number of important properties, such as width, height, and float. Numerous so-called CSS "hacks" must be implemented to achieve consistent layout among the most popular browsers. Pixel-precise layouts can sometimes be impossible to achieve across browsers.
Selectors are unable to ascend
CSS offers no way to select a parent or ancestor of an element that satisfies certain criteria. A more advanced selector scheme (such as XPath) would enable more sophisticated stylesheets. However, the major reasons the CSS Working Group has rejected proposals for parent selectors relate to browser performance and incremental rendering.
One block declaration cannot explicitly inherit from another
Inheritance of styles is performed by the browser based on the containment hierarchy of DOM elements and the specificity of the rule selectors, as described in section 6.4.1 of the CSS2 specification. The only way to make one declaration block build on another is for the document author to list several class names in an element's class attribute.
Vertical control limitations
While horizontal placement of elements is generally easy to control, vertical placement is frequently unintuitive, convoluted, or impossible. Simple tasks, such as centering an element vertically or placing a footer no higher than the bottom of the viewport, require either complicated and unintuitive style rules or simple but widely unsupported ones.
Absence of expressions
There is currently no way to specify property values as simple expressions (such as margin-left: 10% - 3em + 4px;). This would be useful in a variety of cases, such as calculating the size of columns subject to a constraint on the sum of all columns. The CSS WG has published a working draft with a calc() value to address this limitation, and Internet Explorer 5 and later support a proprietary expression() statement with similar functionality.
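To illustrate (a sketch only: calc() was just a working draft at the time, and expression() works only in Internet Explorer; the selector is made up):

.column { margin-left: calc(10% - 3em + 4px); } /* draft calc() syntax */
.column { width: expression(document.body.clientWidth / 2 + "px"); } /* IE's proprietary JScript-based equivalent */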
Lack of orthogonality
Multiple properties often end up doing the same job. For instance, position, display, and float all specify the placement model, and most of the time they cannot be combined meaningfully. A display: table-cell element cannot be floated or given position: relative, and an element with float: left should not react to changes of display. In addition, some properties are not defined in a flexible way that avoids the creation of new properties. For example, the "border-spacing" property must be used on the table element instead of the "margin-*" properties on table cell elements, because according to the CSS specification internal table elements do not have margins.
Margin collapsing
Margin collapsing, while well documented and useful, is also complicated and frequently unexpected by authors, and no simple, side-effect-free way to control it is available.
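For example (an illustrative sketch): when the bottom margin of one paragraph meets the top margin of the next, the two collapse into a single margin equal to the larger value.

p.first { margin-bottom: 20px; }
p.second { margin-top: 10px; } /* the gap between the paragraphs is 20px, not 30px */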
Float containment
CSS does not explicitly offer any property that forces an element to contain its floats. When all the elements inside a container are floated, the container collapses and the floats overflow it. Several properties offer containment as a side effect, but none of them is completely appropriate in all situations; generally, either "position: relative" or "overflow: hidden" solves the problem. Floated layouts also shift with the browser window size and resolution, whereas absolutely positioned elements do not.
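A minimal sketch of the common workaround (class names are made up):

.container { overflow: hidden; } /* side effect: the box now contains its floated children */
.container .item { float: left; width: 33%; }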
Lack of multiple backgrounds per element
Highly graphical designs can require several background images for every element, and CSS supports only one per element. Developers therefore have to choose between adding redundant wrappers around document elements and dropping the visual effect. This is partially addressed in the working draft of the CSS3 backgrounds module, which is already supported in Safari and Konqueror.
Control of Element Shapes
CSS currently only offers rectangular shapes. Rounded corners or other shapes may require non-semantic markup. However, this is addressed in the working draft of the CSS3 backgrounds module.
Lack of Variables
CSS contains no variables. This makes it necessary to do a "replace all" when one desires to change a fundamental constant, such as the color scheme or various heights and widths. This may not even be possible in a reasonable way (consider the case where one wants to replace certain heights which are 50px but not others which are also 50px; this would require very complicated regular expressions). As a result, many developers now use PHP to generate the CSS file, either via CSS @import/PHP require or by declaring the correct content type in the PHP/CSS document's header. The main disadvantage of this is the loss of CSS caching, but the approach can still be very useful in many situations.
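The repetition problem looks like this (colors and selectors are illustrative); every rule that uses the theme color must be edited whenever it changes:

h1 { color: #336699; }
a:link { color: #336699; }
.box { border: 1px solid #336699; } /* three edits for a single color change */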
Lack of column declaration
While possible in current CSS, layouts with multiple columns can be complex to implement. The process is usually done with floated elements, which are often rendered differently by different browsers and are sensitive to screen size and aspect ratio.
Cannot explicitly declare new scope independently of position
Scoping rules for properties such as z-index look for the closest ancestor element with a position: absolute or position: relative declaration. This odd coupling has two undesired effects: 1) it is impossible to avoid declaring a new scope when one is forced to adjust an element's position, preventing one from using the desired scope of a parent element, and 2) authors are often not aware that they must declare position: relative or position: absolute on any element they want to act as "the new scope". Additionally, a bug in the Firefox browser prevents one from declaring table elements as a new CSS scope using position: relative (one can technically do so, but numerous graphical glitches result).
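A sketch of the coupling (selectors are made up): positioning the ancestor also makes it the scope within which its descendants' z-index values are resolved, whether or not that was wanted.

#overlay { position: relative; z-index: 100; } /* establishes the new scope */
#overlay .badge { position: absolute; z-index: 1; } /* resolved inside #overlay's scope */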
Advantages
By combining CSS with the functionality of a Content Management System, a considerable amount of flexibility can be programmed into content submission forms. This allows a contributor who may not be familiar with, or able to edit, CSS or HTML code to select the layout of an article or other page they are submitting on the fly, in the same form. For instance, a contributor, editor, or author of an article or page might be able to select the number of columns and whether or not the page or article will carry an image. This information is then passed to the Content Management System, whose program logic evaluates it and determines, based on a certain number of combinations, how to apply classes and IDs to the HTML elements, thereby styling and positioning them according to the pre-defined CSS for that particular layout type. When working with large-scale, complex sites with many contributors, such as news and informational sites, this advantage weighs heavily in the feasibility and maintenance of the project.
When CSS is used effectively, in terms of inheritance and "cascading," a global stylesheet can be used to style elements site-wide. If the styling of those elements needs to be changed or adjusted, the changes can be made simply by editing a few rules in the global stylesheet. Before CSS, this sort of maintenance was more difficult, expensive, and time-consuming.
Cascading Style Sheets (CSS) is a stylesheet language used to describe the presentation of a document written in a markup language. Its most common application is to style web pages written in HTML and XHTML, but the language can be applied to any kind of XML document, including SVG and XUL.
CSS can be used locally by the readers of web pages to define colors, fonts, layout, and other aspects of document presentation. It is designed primarily to enable the separation of document content (written in HTML or a similar markup language) from document presentation (written in CSS). This separation can improve content accessibility, provide more flexibility and control in the specification of presentation characteristics, and reduce complexity and repetition in the structural content (such as by allowing for tableless web design). CSS can also allow the same markup page to be presented in different styles for different rendering methods, such as on-screen, in print, by voice (when read out by a speech-based browser or screen reader) and on Braille-based, tactile devices. CSS specifies a priority scheme to determine which style rules apply if more than one rule matches against a particular element. In this so-called cascade, priorities or weights are calculated and assigned to rules, so that the results are predictable.
The CSS specifications are maintained by the World Wide Web Consortium (W3C). The Internet media type (MIME type) text/css is registered for use with CSS by RFC 2318 (March 1998).
Syntax
CSS has a simple syntax, and uses a number of English keywords to specify the names of various style properties.
A style sheet consists of a list of rules. Each rule or rule-set consists of one or more selectors and a declaration block. A declaration block consists of a list of semicolon-separated declarations in braces. Each declaration itself consists of a property, a colon (:), a value, and a semicolon (;).
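For instance (an illustrative rule, not taken from the specification), a rule-set with two selectors and two declarations:

h1, h2 { color: navy; font-size: 120%; } /* selectors { property: value; property: value; } */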
In CSS, selectors are used to declare which elements a style applies to, a kind of match expression. Selectors may apply to all elements of a specific type, or only those elements which match a certain attribute; elements may be matched depending on how they are placed relative to each other in the markup code, or on how they are nested within the document object model.
In addition to these, a set of pseudo-classes can be used to define further behavior. Probably the best known of these is :hover, which applies a style only when the user 'points to' the visible element, usually by holding the mouse cursor over it. It is appended to a selector as in a:hover or #elementid:hover. Other pseudo-classes and pseudo-elements are, for example, :first-line, :visited, and :before. A special pseudo-class is :lang(c), which matches elements whose language is "c".
A pseudo-class selects entire elements, such as :link or :visited, whereas a pseudo-element makes a selection that may consist of partial elements, such as :first-line or :first-letter.
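An illustrative pair:

a:visited { color: purple; } /* pseudo-class: styles whole matching elements */
p:first-line { font-variant: small-caps; } /* pseudo-element: styles part of an element */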
Selectors may be combined in other ways too, especially in CSS 2.1, to achieve greater specificity and flexibility.
Use of CSS
Prior to CSS, nearly all of the presentational attributes of HTML documents were contained within the HTML markup; all font colors, background styles, element alignments, borders and sizes had to be explicitly described, often repeatedly, within the HTML. CSS allows authors to move much of that information to a separate stylesheet resulting in considerably simpler HTML markup.
Headings (h1 elements), sub-headings (h2), sub-sub-headings (h3), etc., are defined structurally using HTML. In print and on the screen, choice of font, size, color and emphasis for these elements is presentational.
Prior to CSS, document authors who wanted to assign such typographic characteristics to, say, all h2 headings had to use the HTML font element and other presentational markup for each occurrence of that heading type. The additional presentational markup in the HTML made documents more complex and generally more difficult to maintain. In CSS, presentation is separated from structure. In print, CSS can define color, font, text alignment, size, borders, spacing, layout and many other typographic characteristics. It can do so independently for on-screen and printed views. CSS also defines non-visual styles such as the speed and emphasis with which text is read out by aural text readers. The W3C now considers the advantages of CSS for defining all aspects of the presentation of HTML pages to be superior to other methods. It has therefore deprecated the use of all the original presentational HTML markup.
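For example (an illustrative rule), a single stylesheet declaration now replaces the per-occurrence font markup for every h2 in a site:

h2 { font-family: Georgia, serif; color: #663300; font-size: 1.4em; }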
Sources
CSS information can be provided by various sources. CSS style information can be either attached as a separate document or embedded in the HTML document. Multiple style sheets can be imported. Different styles can be applied depending on the output device being used; for example, the screen version can be quite different from the printed version, so that authors can tailor the presentation appropriately for each medium.
* Author styles (style information provided by the web page author), in the form of
o external stylesheets, i.e. a separate CSS-file referenced from the document
o embedded style, blocks of CSS information inside the HTML document itself
o inline styles, inside the HTML document, style information on a single element, specified using the "style" attribute.
* User style
o a local CSS-file specified by the user using options in the web browser, and acting as an override, to be applied to all documents.
* User agent style
o the default style sheet applied by the user agent, e.g. the browser's default presentation of elements.
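In an HTML document, the three author-style forms look like this (a sketch; the file name is made up):

<link rel="stylesheet" type="text/css" href="site.css"> <!-- external stylesheet -->
<style type="text/css"> p { margin: 1em 0; } </style> <!-- embedded style block -->
<p style="margin: 1em 0">...</p> <!-- inline style on a single element -->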
One of the goals of CSS is also to allow users a greater degree of control over presentation; users who find red italic headings difficult to read, for example, may apply other style sheets to the document. Depending on the browser and the web site, a user may choose from various stylesheets provided by the designers, may remove all added styles and view the site using the browser's default styling, or may override just the red italic heading style without altering other attributes.
File highlightheaders.css containing:
h1 { color: white; background: orange !important; }
h2 { color: white; background: green !important; }
Such a file is stored locally and is applied when the user has selected it in the browser options. "!important" means that the user's rule prevails over the author's specifications.
History
Style sheets have existed in one form or another since the beginnings of SGML in the 1970s. Cascading Style Sheets were developed as a means for creating a consistent approach to providing style information for web documents.
As HTML grew, it came to encompass a wider variety of stylistic capabilities to meet the demands of web developers. This evolution gave the designer more control over site appearance but at the cost of HTML becoming more complex to write and maintain. Variations in web browser implementations made consistent site appearance difficult, and users had less control over how web content was displayed.
To improve the capabilities of web presentation, nine different style sheet languages were proposed to the W3C's www-style mailing list. Of the nine proposals, two were chosen as the foundation for what became CSS: Cascading HTML Style Sheets (CHSS) and Stream-based Style Sheet Proposal (SSP). First, Håkon Wium Lie (now the CTO of Opera Software) proposed Cascading HTML Style Sheets (CHSS) in October 1994, a language which has some resemblance to today's CSS. Bert Bos was working on a browser called Argo which used its own style sheet language, Stream-based Style Sheet Proposal (SSP). Lie and Bos worked together to develop the CSS standard (the 'H' was removed from the name because these style sheets could be applied to other markup languages besides HTML).
Unlike existing style languages like DSSSL and FOSI, CSS allowed a document's style to be influenced by multiple style sheets. One style sheet could inherit or "cascade" from another, permitting a mixture of stylistic preferences controlled equally by the site designer and user.
Håkon's proposal was presented at the "Mosaic and the Web" conference in Chicago, Illinois in 1994, and again with Bert Bos in 1995. Around this time, the World Wide Web Consortium was being established; the W3C took an interest in the development of CSS, and it organized a workshop toward that end chaired by Steven Pemberton. This resulted in W3C adding work on CSS to the deliverables of the HTML editorial review board (ERB). Håkon and Bert were the primary technical staff on this aspect of the project, with additional members, including Thomas Reardon of Microsoft, participating as well. By the end of 1996, CSS was ready to become official, and the CSS level 1 Recommendation was published in December.
Development of HTML, CSS, and the DOM had all been taking place in one group, the HTML Editorial Review Board (ERB). Early in 1997, the ERB was split into three working groups: HTML Working group, chaired by Dan Connolly of W3C; DOM Working group, chaired by Lauren Wood of SoftQuad; and CSS Working group, chaired by Chris Lilley of W3C.
The CSS Working Group began tackling issues that had not been addressed with CSS level 1, resulting in the creation of CSS level 2 on November 4, 1997. It was published as a W3C Recommendation on May 12, 1998. CSS level 3, which was started in 1998, is still under development as of 2008.
In 2005 the CSS Working Group decided to enforce the requirements for standards more strictly. This meant that already published standards like CSS 2.1, CSS3 Selectors and CSS3 Text were pulled back from Candidate Recommendation to Working Draft level.
Difficulty with adoption
Although the CSS1 specification was completed in 1996 and Microsoft's Internet Explorer 3 was released in that year featuring some limited support for CSS, it would be more than three years before any web browser achieved near-full implementation of the specification. Internet Explorer 5.0 for the Macintosh, shipped in March 2000, was the first browser to have full (better than 99 percent) CSS1 support, surpassing Opera, which had been the leader since its introduction of CSS support 15 months earlier. Other browsers followed soon afterwards, and many of them additionally implemented parts of CSS2. As of July 2008, no (finished) browser has fully implemented CSS2, with implementation levels varying (see Comparison of layout engines (CSS)).
Even though early browsers such as Internet Explorer 3 and 4, and Netscape 4.x had support for CSS, it was typically incomplete and afflicted with serious bugs. This was a serious obstacle for the adoption of CSS.
When later 'version 5' browsers began to offer a fairly full implementation of CSS, they were still incorrect in certain areas and were fraught with inconsistencies, bugs and other quirks. The proliferation of such CSS-related inconsistencies, and even the variation in feature support, has made it difficult for designers to achieve a consistent appearance across platforms. Authors commonly resort to workarounds such as CSS hacks and CSS filters to obtain consistent results across web browsers and platforms.
Problems with browsers' patchy adoption of CSS, along with errata in the original specification, led the W3C to revise the CSS2 standard into CSS2.1, which may be regarded as something nearer to a working snapshot of current CSS support in HTML browsers. Some CSS2 properties which no browser had successfully implemented were dropped, and in a few cases defined behaviours were changed to bring the standard into line with the predominant existing implementations. CSS2.1 became a Candidate Recommendation on February 25, 2004, but was pulled back to Working Draft status on June 13, 2005, and only returned to Candidate Recommendation status on July 19, 2007.
In the past, some web servers were configured to serve all documents with the filename extension .css as MIME type application/x-pointplus rather than text/css. At the time, the Net-Scene company was selling PointPlus Maker to convert PowerPoint files into Compact Slide Show files (using a .css extension).
Variations
CSS has various levels and profiles. Each level of CSS builds upon the last, typically adding new features; the levels are denoted CSS1, CSS2, and CSS3. Profiles are typically a subset of one or more levels of CSS built for a particular device or user interface. Currently there are profiles for mobile devices, printers, and television sets. Profiles should not be confused with media types, which were added in CSS2.
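Media types, by contrast, select among output devices from within a single stylesheet, as in this illustrative sketch:

@media screen { body { font-size: 13px; } }
@media print { body { font-size: 10pt; } }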
CSS 1
The first CSS specification to become an official W3C Recommendation is CSS level 1, published in December 1996.[5] Among its capabilities are support for:
* Font properties such as typeface and emphasis
* Color of text, backgrounds, and other elements
* Text attributes such as spacing between words, letters, and lines of text
* Alignment of text, images, tables and other elements
* Margin, border, padding, and positioning for most elements
* Unique identification and generic classification of groups of attributes
The W3C maintains the CSS1 Recommendation.
CSS 2
CSS level 2 was developed by the W3C and published as a Recommendation in May 1998. A superset of CSS1, CSS2 includes a number of new capabilities like absolute, relative, and fixed positioning of elements, the concept of media types, support for aural style sheets and bidirectional text, and new font properties such as shadows. The W3C maintains the CSS2 Recommendation.
CSS level 2 revision 1, or CSS 2.1, fixes errors in CSS2, removes poorly supported features, and adds already-implemented browser extensions to the specification. While it was a Candidate Recommendation for several months, on June 13, 2005 it was reverted to a Working Draft for further review. It was returned to Candidate Recommendation status on July 19, 2007.
CSS 3
CSS level 3 is currently under development.[9] The W3C maintains a CSS3 progress report. CSS3 is modularized and will consist of several separate Recommendations. The W3C CSS3 Roadmap provides a summary and introduction.
Browser support
A CSS filter is a coding technique that aims to hide or show parts of the CSS to different browsers, either by exploiting CSS-handling quirks or bugs in the browser, or by taking advantage of a lack of support for parts of the CSS specifications. Using CSS filters, some designers have gone as far as delivering entirely different CSS to certain browsers to ensure that designs render as expected. Because very early web browsers were either completely incapable of handling CSS or rendered CSS very poorly, designers today routinely use CSS filters that completely prevent those browsers from accessing any of the CSS. Internet Explorer support for CSS began with IE 3.0 and increased progressively with each version. By 2008, the first beta of Internet Explorer 8 offered support for CSS 2.1 in its best web standards mode.
An example of a well-known CSS browser bug is the Internet Explorer box model bug, where box widths are interpreted incorrectly in several versions of the browser, resulting in blocks which are too narrow when viewed in Internet Explorer, but correct in standards-compliant browsers. The bug can be avoided in Internet Explorer 6 by using the correct doctype in (X)HTML documents. CSS hacks and CSS filters are used to compensate for bugs such as this, just one of hundreds of CSS bugs that have been documented in various versions of Netscape, Mozilla Firefox, Opera, and Internet Explorer (including Internet Explorer 7).
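A sketch of the discrepancy (values are made up): under the CSS box model the declared width excludes padding and borders, while the buggy versions include them.

#box { width: 300px; padding: 10px; border: 10px solid black; }
/* standards-compliant total width: 300 + 2*10 + 2*10 = 340px; buggy IE renders the whole box 300px wide */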
Even when the availability of CSS-capable browsers made CSS a viable technology, the adoption of CSS was still held back by designers' struggles with browsers' incorrect CSS implementation and patchy CSS support. Even today, these problems continue to make the business of CSS design more complex and costly than it should be, and cross-browser testing remains a necessity. Other reasons for continuing non-adoption of CSS are: its perceived complexity, authors' lack of familiarity with CSS syntax and required techniques, poor support from authoring tools, the risks posed by inconsistency between browsers and the increased costs of testing.
Currently there is strong competition between Mozilla's Gecko layout engine, the WebKit layout engine used in Apple's Safari, the similar KHTML engine used in KDE's Konqueror browser, and Opera's Presto layout engine; each leads in different aspects of CSS. As of 2007, Internet Explorer's Trident engine remained the worst at rendering CSS as judged by World Wide Web Consortium standards.[14][15] The Internet Explorer 8 beta, released in April 2008, fixes many of these shortcomings and renders CSS 2.1. The IEBlog claims that it passes some versions of the Acid2 test.
XMLHttpRequest (XHR) is a DOM API that can be used by JavaScript and other web browser scripting languages to transfer XML and other text data between a web server and a browser. This type of Ajax architecture should not be confused with XDomainRequest (XDR), a lightweight variant of XMLHttpRequest designed by Microsoft for cross-domain requests.
The data returned from XMLHttpRequest calls will often be provided by back-end databases. Besides XML, XMLHttpRequest can be used to fetch data in other formats such as HTML, JSON or plain text.
XMLHttpRequest is an important part of the Ajax web development technique, and it is used by many websites to implement responsive and dynamic web applications. Examples of web applications that make use of XMLHttpRequest include Google Maps, Windows Live's Virtual Earth, the MapQuest dynamic map interface, Facebook, and many others.
Methods
abort()
Cancels the current request.
getAllResponseHeaders()
Returns the complete set of HTTP headers as a string.
getResponseHeader(headerName)
Returns the value of the specified HTTP header.
open(method, URL)
open(method, URL, async)
open(method, URL, async, userName)
open(method, URL, async, userName, password)
Specifies the method, URL, and other optional attributes of a request.
* The method parameter can have a value of GET, POST, HEAD, PUT, DELETE, or a variety of other HTTP methods listed in the W3C specification.[2]
* The URL parameter may be either a relative or complete URL.
* The async parameter specifies whether the request should be handled asynchronously or not – true means that script processing carries on after the send() method, without waiting for a response, and false means that the script waits for a response before continuing script processing.
send(content)
Sends the request. content can be a string or reference to a document.
setRequestHeader(label, value)
Adds a label/value pair to the HTTP header to be sent.
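Putting these methods together, a minimal synchronous GET might look like this (an illustrative sketch; the URL and header are made up):

var request = new XMLHttpRequest();
request.open("GET", "/data.txt", false); // false = synchronous
request.setRequestHeader("X-Example", "demo"); // optional extra request header
request.send(null);
alert(request.responseText);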
Properties
onreadystatechange
Specifies a reference to an event handler for an event that fires at every state change
readyState
Returns the state of the object as follows:
* 0 = uninitialized – open() has not yet been called.
* 1 = open – send() has not yet been called.
* 2 = sent – send() has been called, headers and status are available.
* 3 = receiving – Downloading, responseText holds partial data (although this functionality is not available in IE [3])
* 4 = loaded – Done.
responseText
Returns the response as a string.
responseXML
Returns the response as XML. This property returns an XML document object, which can be examined and parsed using W3C DOM node tree methods and properties.
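For example (illustrative; assumes the response is an XML document containing a title element with text content):

var xmlDoc = request.responseXML;
var title = xmlDoc.getElementsByTagName("title")[0].firstChild.nodeValue;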
responseBody
Returns the response as a binary encoded string. This property is not part of the native XMLHttpRequest wrapper. For this property to be available, the XHR object must be created with an ActiveX component. A JScript example:
if (typeof ActiveXObject != "undefined") {
    var xmlhttp = new ActiveXObject("MSXML2.XMLHTTP");
    xmlhttp.open("GET", "#", false);
    xmlhttp.send(null);
    alert(xmlhttp.responseBody);
} else {
    alert("This browser does not support Microsoft ActiveXObjects.");
}
status
Returns the HTTP status code as a number (e.g. 404 for "Not Found" and 200 for "OK"). Some network-related status codes (e.g. 408 for "Request Timeout") cause errors to be thrown in Firefox if the status fields are accessed.[4] If the server does not respond (properly), IE returns a WinInet error code (e.g. 12029 for "cannot connect").
statusText
Returns the status as a string (e.g. "Not Found" or "OK").
History and support
The XMLHttpRequest concept was originally developed by Microsoft as part of Outlook Web Access 2000. At the time, it was not a standards-based web client feature; however, de facto support for it was implemented by many major web browsers. The Microsoft implementation is called XMLHTTP. It has been available since the introduction of Internet Explorer 5.0[5] and is accessible via JScript, VBScript and other scripting languages supported by IE browsers. In JScript, Internet Explorer versions prior to 7.0 require XMLHTTP to be invoked as an ActiveXObject; they cannot instantiate the XMLHttpRequest class directly, but a shim such as the following provides it:
// Provide the XMLHttpRequest class for IE 5.x-6.x:
if (typeof XMLHttpRequest == "undefined") XMLHttpRequest = function() {
    try { return new ActiveXObject("Msxml2.XMLHTTP.6.0"); } catch(e) {}
    try { return new ActiveXObject("Msxml2.XMLHTTP.3.0"); } catch(e) {}
    try { return new ActiveXObject("Msxml2.XMLHTTP"); } catch(e) {}
    try { return new ActiveXObject("Microsoft.XMLHTTP"); } catch(e) {}
    throw new Error("This browser does not support XMLHttpRequest.");
};
The Mozilla project incorporated the first compatible native implementation of XMLHttpRequest in Mozilla 1.0 in 2002. This implementation was later followed by Apple since Safari 1.2, Konqueror, Opera Software since Opera 8.0, iCab since 3.0b352, and Microsoft since Internet Explorer 7.0.
The World Wide Web Consortium published a Working Draft specification for the XMLHttpRequest object's API on 15 April 2008.[6] Its goal is "to document a minimum set of interoperable features based on existing implementations, allowing Web developers to use these features without platform-specific code". The draft specification is based upon existing popular implementations, to help improve and ensure interoperability of code across web platforms. As of April 2008, the W3C standard was still a work in progress.
On 25 February 2008 the W3C also published a Working Draft of the XMLHttpRequest Level 2 API, with some extended functionality such as progress events, cross-site requests, and the handling of byte streams.
Web pages that use XMLHttpRequest or XMLHTTP can mitigate the current minor differences in the implementations either by encapsulating the XMLHttpRequest object in a JavaScript wrapper, or by using an existing framework that does so. In either case, the wrapper should detect the abilities of the current implementation and work within its requirements.
Traditionally, there have been other methods of achieving similarly dynamic server communication using scripting languages and/or plugins:
* Invisible IFrames (or Netscape Navigator's equivalent ilayers)
* Remote Scripting
* Netscape's LiveConnect
* Microsoft's ActiveX
* Microsoft's XML Data Islands
* Adobe Flash Player
* Java applets
In addition, the World Wide Web Consortium has several recommendations that also allow for dynamic communication between a server and user agent, though few of them are well supported.[citation needed] These include:
* The object element defined in HTML 4 for embedding arbitrary content types into documents (replacing inline frames under XHTML 1.1)
* The Document Object Model (DOM) Level 3 Load and Save Specification.
Example Code
Here is a cross-browser general purpose example of an AJAX/XMLHttpRequest JavaScript function. See the history and support section for compatibility with versions of Internet Explorer prior to 7.0.
function ajax(url, vars, callbackFunction) {
    var request = new XMLHttpRequest();
    request.open("POST", url, true); // true = asynchronous
    request.setRequestHeader("Content-Type",
        "application/x-www-form-urlencoded");
    // Fire the callback only once the response has fully arrived.
    request.onreadystatechange = function() {
        var done = 4, ok = 200;
        if (request.readyState == done && request.status == ok) {
            if (request.responseText) {
                callbackFunction(request.responseText);
            }
        }
    };
    request.send(vars);
}
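Usage might look like this (the URL and parameters are made up):

ajax("/comments/save", "id=42&text=hello", function(response) {
    alert("Server replied: " + response);
});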
Known problems
Caching
Most XMLHttpRequest implementations also honor HTTP caching; Internet Explorer and Firefox both do, but they differ in how and when the cached data is revalidated. Firefox revalidates the cached response every time the page is refreshed, issuing an "If-Modified-Since" header whose value is taken from the "Last-Modified" header of the cached response.
Internet Explorer does so only if the cached response is expired (i.e., after the date of received "Expires" header). This raises some issues, since a bug exists in Internet Explorer, where the cached response is never refreshed.[9]
It is possible to unify the caching behavior on the client. The following script illustrates an example approach (See the history and support section for compatibility with versions of Internet Explorer prior to 7.0):
var request = new XMLHttpRequest();
request.open("GET", url, false);
request.send(null);
if(!request.getResponseHeader("Date")) {
var cached = request;
request = new XMLHttpRequest();
var ifModifiedSince = cached.getResponseHeader("Last-Modified");
ifModifiedSince = (ifModifiedSince) ?
ifModifiedSince : new Date(0); // January 1, 1970
request.open("GET", url, false);
request.setRequestHeader("If-Modified-Since", ifModifiedSince);
request.send("");
if(request.status == 304) {
request = cached;
}
}
In Internet Explorer, if the response is returned from the cache without revalidation, the "Date" header is an empty string. The workaround is achieved by checking the "Date" response header and issuing another request if needed. In case a second request is needed, the actual HTTP request is not made twice, as the first call would not produce an actual HTTP request.
The reference to the cached request is preserved, because if the response code/status of the second call is "304 Not Modified", the response body becomes an empty string ("") and then it is needed to go back to the cached object. A way to save memory and expenses of second object creation is to preserve just the needed response data and reuse the XMLHttpRequest object.
The above script relies on the assumption that the "Date" header is always issued by the server, which should be true for most server configurations. Also, it illustrates a synchronous communication between the server and the client. In case of asynchronous communication, the check should be made during the callback.
This problem is often overcome by employing techniques preventing the caching at all. Using these techniques indiscriminately can result in poor performance and waste of network bandwidth.
If script executes operation that has side effects (e.g. adding a comment, marking message as read) which requires that request always reaches the end server, it should use POST method instead.
Workaround
Internet Explorer will also cache dynamic pages, this is a problem because the URL of the page may not change but the content will (For example a news feed). A work around for this situation can be achieved by adding a unique time stamp or random number, or possibly both, typically using the Date object and/or Math.random().
For simple document request the query string delimiter '?' can be used, or for existing queries a final sub-query can be added after a final '&' – to append the unique query term to the existing query. The downside is that each such request will fill up the cache with useless (never reused) content that could otherwise be used for other cached content (more useful data will be purged from cache to make room for these one-time responses).
Reusing XMLHttpRequest Object in IE
In IE, if the open method is called after setting the onreadystatechange callback, there will be a problem when trying to reuse the XHR object. To be able to reuse the XHR object properly, use the open method first and set onreadystatechange later. This happens because IE resets the object implicitly in the open method if the status is 'completed'. For more explanation of reuse: Reusing XMLHttpRequest Object in IE. The downside to calling the open method after setting the callback is a loss of cross-browser support for readystates. See the quirksmode article.
Cross-browser support
Microsoft developers were the first to include the XMLHttp object in their MSXML ActiveX control. Developers at the open source Mozilla project saw this invention and ported their own XMLHttp, not as an ActiveX control but as a native browser object called XMLHttpRequest. Konqueror, Opera and Safari have since implemented similar functionality but more along the lines of an identical XMLHttpRequest. Some Ajax developer and run-time frameworks only support one implementation of XMLHttp while others support both. Developers building Ajax functionality from scratch can provide if/else logic within their client-side JavaScript to use the appropriate XMLHttp object as well. Internet Explorer 7 added native support for the XMLHttpRequest object, but retains backward compatibility with the ActiveX implementation.[5] To avoid excess conditionals, see the source code in the history and support section.
MRF Web Technologies
Mrf Web Design
Web Management India
Web Design Devlopment
Web Solution Tools
The data returned from XMLHttpRequest calls will often be provided by back-end databases. Besides XML, XMLHttpRequest can be used to fetch data in other formats such as HTML, JSON or plain text.
XMLHttpRequest is an important part of the Ajax web development technique, and it is used by many websites to implement responsive and dynamic web applications. Examples of web applications that make use of XMLHttpRequest include Google Maps, Windows Live's Virtual Earth, the MapQuest dynamic map interface, Facebook, and many others.
Methods
abort()
Cancels the current request.
getAllResponseHeaders()
Returns the complete set of HTTP headers as a string.
getResponseHeader(headerName)
Returns the value of the specified HTTP header.
open(method, URL)
open(method, URL, async)
open(method, URL, async, userName)
open(method, URL, async, userName, password)
Specifies the method, URL, and other optional attributes of a request.
* The method parameter can have a value of GET, POST, HEAD, PUT, DELETE, or a variety of other HTTP methods listed in the W3C specification.[2]
* The URL parameter may be either a relative or complete URL.
* The async parameter specifies whether the request should be handled asynchronously or not – true means that script processing carries on after the send() method, without waiting for a response, and false means that the script waits for a response before continuing script processing.
send(content)
Sends the request. content can be a string or reference to a document.
setRequestHeader(label, value)
Adds a label/value pair to the HTTP header to be sent.
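Taken together, the methods above are normally used in a fixed order: open(), then any headers, then send(). A minimal sketch (the "/submit" URL and the form data are hypothetical):
// Typical call sequence: open(), then setRequestHeader(), then send()
var request = new XMLHttpRequest();
request.open("POST", "/submit", true); // method, URL, asynchronous
request.setRequestHeader("Content-Type",
    "application/x-www-form-urlencoded"); // headers must be set after open()
request.send("name=value&id=1"); // the request body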
Properties
onreadystatechange
Specifies a reference to an event handler for an event that fires at every state change.
readyState
Returns the state of the object as follows:
* 0 = uninitialized – open() has not yet been called.
* 1 = open – send() has not yet been called.
* 2 = sent – send() has been called, headers and status are available.
* 3 = receiving – Downloading, responseText holds partial data (although this functionality is not available in IE [3])
* 4 = loaded – Done.
responseText
Returns the response as a string.
responseXML
Returns the response as XML. This property returns an XML document object, which can be examined and parsed using W3C DOM node tree methods and properties.
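As a minimal sketch of the above, assuming the server returns an XML document such as <items><item>...</item></items> (the "/data.xml" URL and element names are hypothetical):
var request = new XMLHttpRequest();
request.open("GET", "/data.xml", false); // synchronous, for brevity
request.send(null);
var doc = request.responseXML; // an XML document object
if (doc) {
    // Standard W3C DOM traversal of the returned document
    var items = doc.getElementsByTagName("item");
    for (var i = 0; i < items.length; i++) {
        alert(items[i].firstChild.nodeValue);
    }
}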
responseBody
Returns the response as a binary encoded string. This property is not part of the native XMLHttpRequest object; for it to be available, the XHR object must be created with an ActiveX component. A JScript example:
if (typeof ActiveXObject != "undefined") {
    // Create the request via ActiveX so that responseBody is exposed
    var xmlhttp = new ActiveXObject("MSXML2.XMLHTTP");
    xmlhttp.open("GET", "#", false); // synchronous request to the current page
    xmlhttp.send(null);
    alert(xmlhttp.responseBody);
} else {
    alert("This browser does not support Microsoft ActiveXObjects.");
}
status
Returns the HTTP status code as a number (e.g. 404 for "Not Found" and 200 for "OK"). Some network-related status codes (e.g. 408 for "Request Timeout") cause errors to be thrown in Firefox if the status fields are accessed.[4] If the server does not respond (properly), IE returns a WinInet error code (e.g. 12029 for "cannot connect").
statusText
Returns the status as a string (e.g. "Not Found" or "OK").
History and support
The XMLHttpRequest concept was originally developed by Microsoft as a server-side API call for Outlook Web Access 2000. At the time, it was not a standards-based web client feature; however, de facto support for it was implemented by many major web browsers. The Microsoft implementation is called XMLHTTP. It has been available since the introduction of Internet Explorer 5.0[5] and is accessible via JScript, VBScript and other scripting languages supported by IE browsers. In JScript, Internet Explorer versions prior to 7.0 require XMLHTTP to be invoked as an ActiveXObject, so they cannot instantiate the XMLHttpRequest class without help such as the following:
// Provide the XMLHttpRequest class for IE 5.x-6.x:
if (typeof XMLHttpRequest == "undefined") XMLHttpRequest = function () {
    // Try the newest available MSXML version first, then fall back
    try { return new ActiveXObject("Msxml2.XMLHTTP.6.0"); } catch (e) {}
    try { return new ActiveXObject("Msxml2.XMLHTTP.3.0"); } catch (e) {}
    try { return new ActiveXObject("Msxml2.XMLHTTP"); } catch (e) {}
    try { return new ActiveXObject("Microsoft.XMLHTTP"); } catch (e) {}
    throw new Error("This browser does not support XMLHttpRequest.");
};
The Mozilla project incorporated the first compatible native implementation of XMLHttpRequest in Mozilla 1.0 in 2002. Other vendors followed: Apple from Safari 1.2, Konqueror, Opera Software from Opera 8.0, iCab from 3.0b352, and Microsoft from Internet Explorer 7.0.
The World Wide Web Consortium published a Working Draft specification for the XMLHttpRequest object's API on 15 April 2008.[6] Its goal is "to document a minimum set of interoperable features based on existing implementations, allowing Web developers to use these features without platform-specific code". The draft specification is based upon existing popular implementations, to help improve and ensure interoperability of code across web platforms. As of April 2008, the W3C standard was still a work in progress.
On 25 February 2008, the W3C also published a Working Draft of the XMLHttpRequest2 API, which adds extended functionality such as progress events, cross-site requests, and handling of byte streams.
Web pages that use XMLHttpRequest or XMLHTTP can mitigate the current minor differences in the implementations either by encapsulating the XMLHttpRequest object in a JavaScript wrapper, or by using an existing framework that does so. In either case, the wrapper should detect the abilities of the current implementation and work within its requirements.
Traditionally, other methods have been used to achieve a similar effect of dynamic server communication, using scripting languages and/or plugins:
* Invisible IFrames (or Netscape Navigator's equivalent ilayers)
* Remote Scripting
* Netscape's LiveConnect
* Microsoft's ActiveX
* Microsoft's XML Data Islands
* Adobe Flash Player
* Java applets
In addition, the World Wide Web Consortium has several recommendations that also allow for dynamic communication between a server and user agent, though few of them are well supported.[citation needed] These include:
* The object element defined in HTML 4 for embedding arbitrary content types into documents (it replaces inline frames under XHTML 1.1)
* The Document Object Model (DOM) Level 3 Load and Save Specification.
Example Code
Here is a cross-browser, general-purpose example of an AJAX/XMLHttpRequest JavaScript function. See the history and support section for compatibility with versions of Internet Explorer prior to 7.0.
function ajax(url, vars, callbackFunction) {
    var request = new XMLHttpRequest();
    request.open("POST", url, true);
    // Send the body as URL-encoded form data
    request.setRequestHeader("Content-Type",
        "application/x-www-form-urlencoded");
    request.onreadystatechange = function () {
        var done = 4, ok = 200;
        // Invoke the callback only once the response has fully arrived
        if (request.readyState == done && request.status == ok) {
            if (request.responseText) {
                callbackFunction(request.responseText);
            }
        }
    };
    request.send(vars); // vars: a URL-encoded string, e.g. "a=1&b=2"
}
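A hypothetical call to the function above might look like this (the "/comments" endpoint and the parameters are illustrative only):
ajax("/comments", "article=42&text=Nice+post", function (responseText) {
    alert("Server replied: " + responseText); // runs when the POST completes
});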
Known problems
Caching
Most XMLHttpRequest implementations also honor HTTP caching. Internet Explorer and Firefox both do, but they differ in how and when cached data is revalidated. Firefox revalidates the cached response every time the page is refreshed, issuing an "If-Modified-Since" header whose value is the value of the "Last-Modified" header of the cached response.
Internet Explorer does so only if the cached response has expired (i.e., after the date in the received "Expires" header). This raises some issues, since a bug exists in Internet Explorer where the cached response is never refreshed.[9]
It is possible to unify the caching behavior on the client. The following script illustrates an example approach (see the history and support section for compatibility with versions of Internet Explorer prior to 7.0):
var request = new XMLHttpRequest();
request.open("GET", url, false);
request.send(null);
// In IE, a response served from the cache without revalidation
// carries no "Date" header (see below)
if (!request.getResponseHeader("Date")) {
    var cached = request;
    request = new XMLHttpRequest();
    // Revalidate against the cached copy's Last-Modified timestamp,
    // falling back to the epoch if that header is absent
    var ifModifiedSince = cached.getResponseHeader("Last-Modified");
    ifModifiedSince = (ifModifiedSince) ?
        ifModifiedSince : new Date(0).toUTCString(); // January 1, 1970
    request.open("GET", url, false);
    request.setRequestHeader("If-Modified-Since", ifModifiedSince);
    request.send("");
    if (request.status == 304) {
        request = cached; // the server confirmed the cached copy is current
    }
}
In Internet Explorer, if the response is returned from the cache without revalidation, the "Date" header is an empty string. The workaround therefore checks the "Date" response header and issues a second request if needed. When a second request is needed, no actual HTTP request is made twice, because the first call was answered from the cache and never reached the network.
The reference to the cached request is preserved because, if the response code of the second call is "304 Not Modified", the response body becomes an empty string ("") and the script must fall back to the cached object. To save memory and the expense of creating a second object, one can instead preserve just the needed response data and reuse a single XMLHttpRequest object.
The above script relies on the assumption that the "Date" header is always issued by the server, which is true for most server configurations. It also illustrates synchronous communication between server and client; with asynchronous communication, the check should be made in the callback.
This problem is often sidestepped by techniques that prevent caching altogether. Used indiscriminately, such techniques result in poor performance and wasted network bandwidth.
If a script performs an operation with side effects (e.g. adding a comment, marking a message as read) that requires the request to always reach the origin server, it should use the POST method instead.
Workaround
Internet Explorer also caches dynamic pages. This is a problem because the URL of a page may stay the same while its content changes (for example, a news feed). A workaround is to append a unique timestamp, a random number, or both to the URL, typically generated with the Date object and/or Math.random().
For a simple document request, the query-string delimiter '?' can be used; for existing queries, a final sub-query can be added after a final '&' to append the unique query term to the existing query. The downside is that each such request fills the cache with useless (never reused) content, and more useful data will be purged from the cache to make room for these one-time responses.
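A minimal sketch of this cache-busting technique (the "/news" URL and the "nocache" parameter name are hypothetical):
var url = "/news";
// Append a unique term so the browser treats every request as a new resource
var unique = new Date().getTime() + "" + Math.floor(Math.random() * 1000000);
url += (url.indexOf("?") == -1 ? "?" : "&") + "nocache=" + unique;
var request = new XMLHttpRequest();
request.open("GET", url, true);
request.send(null);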
Reusing XMLHttpRequest Object in IE
In IE, if the open method is called after setting the onreadystatechange callback, there is a problem when trying to reuse the XHR object: IE implicitly resets the object inside the open method when its state is 'completed', which discards the callback. To reuse the XHR object properly, call the open method first and set onreadystatechange afterwards. The downside of calling open before setting the callback is a loss of cross-browser support for ready states (see the quirksmode article).
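A sketch of the safe ordering when reusing a single object (the helper name is hypothetical):
function requestAgain(xhr, url, callback) {
    xhr.open("GET", url, true); // open() first: IE may implicitly reset the object here
    xhr.onreadystatechange = function () { // ...so assign the callback afterwards
        if (xhr.readyState == 4 && xhr.status == 200) {
            callback(xhr.responseText);
        }
    };
    xhr.send(null);
}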
Cross-browser support
Microsoft developers were the first to include the XMLHttp object, in their MSXML ActiveX control. Developers at the open-source Mozilla project then implemented a compatible version, not as an ActiveX control but as a native browser object called XMLHttpRequest. Konqueror, Opera and Safari have since implemented nearly identical XMLHttpRequest functionality. Some Ajax developer and run-time frameworks support only one implementation of XMLHttp, while others support both. Developers building Ajax functionality from scratch can provide if/else logic within their client-side JavaScript to select the appropriate XMLHttp object. Internet Explorer 7 added native support for the XMLHttpRequest object, but retains backward compatibility with the ActiveX implementation.[5] To avoid excess conditionals, see the source code in the history and support section.
Rich Internet applications (RIAs) are web applications that have the features and functionality of traditional desktop applications. RIAs typically form a stateful client application with a separate services layer on the backend.
RIAs typically do the following:
* run in a web browser, or do not require software installation
* run locally in a secure environment called a sandbox
History
The term "rich Internet application" was introduced in a white paper of March 2002 by Macromedia (now merged into Adobe), though the concept had existed for a number of years earlier under names such as:
* Remote Scripting, by Microsoft, circa 1998
* X Internet, by Forrester Research in October 2000
* Rich (web) clients
* Rich web application
Traditional web applications centered all activity around a client-server architecture with a thin client. Under this system, all processing is done on the server, and the client is only used to display static (in this case HTML) content. The biggest drawback with this system is that all interaction with the application must pass through the server, which requires data to be sent to the server, the server to respond, and the page to be reloaded on the client with the response. By using a client side technology which can execute instructions on the client's computer, RIAs can circumvent this slow and synchronous loop for many user interactions. This difference is somewhat analogous to the difference between "terminal and mainframe" and client-server/fat client approaches.
Internet standards have evolved slowly and continually over time to accommodate these techniques, so it is hard to draw a strict line between what constitutes an RIA and what does not. But all RIAs share one characteristic: they introduce an intermediate layer of code, often called a client engine, between the user and the server. This client engine is usually downloaded as part of the instantiation of the application, and may be supplemented by further code downloads as use of the application progresses. The client engine acts as an extension of the browser, and usually takes over responsibility for rendering the application's user interface and for server communication.
What can be done in an RIA may be limited by the capabilities of the system used on the client. But in general, the client engine is programmed to perform application functions that its designer believes will enhance some aspect of the user interface, or improve its responsiveness when handling certain user interactions, compared to a standard Web browser implementation. Also, while simply adding a client engine does not force an application to depart from the normal synchronous pattern of interactions between browser and server, in most RIAs the client engine performs additional asynchronous communications with servers.
Benefits
Although developing applications to run in a web browser is a much more limiting, difficult, and intricate process than developing a regular desktop application, the efforts are often justified because:
* installation footprint is smaller -- overhead for updating and distributing the application is trivial, or significantly reduced compared to a desktop or OS native application
* updates/upgrades to new versions can be automatic or transparent to the end user
* users can use the application from any computer with an internet connection
* many tools exist to allow off-line use of applications, such as Adobe AIR, Google Gears, Curl, and other technologies
* most RIA technologies allow the user experience to be consistent, regardless of what operating system the client uses.
* web-based applications are generally less prone to viral infection than running an actual executable
Because RIAs employ a client-side engine to interact with the user, they are, when compared with "traditional" web applications:
* Richer. They can offer user-interface behaviors not obtainable using only the HTML widgets available to standard browser-based Web applications. This richer functionality may include anything that can be implemented in the technology being used on the client side, including drag and drop, using a slider to change data, calculations performed only by the client and which do not need to be sent back to the server, for example, a mortgage calculator.
* More responsive. The interface behaviors are typically much more responsive than those of a standard Web browser that must always interact with a remote server.
The most sophisticated examples of RIAs exhibit a look and feel approaching that of a desktop environment. Using a client engine can also produce other performance benefits:
* Client/Server balance. The demand for client and server computing resources is better balanced, so that the Web server need not be the workhorse that it is with a traditional Web application. This frees server resources, allowing the same server hardware to handle more client sessions concurrently.
* Asynchronous communication. The client engine can interact with the server without waiting for the user to perform an interface action such as clicking on a button or link. This allows the user to view and interact with the page asynchronously from the client engine's communication with the server, and lets RIA designers move data between the client and the server without making the user wait. Perhaps the most common application of this is pre-fetching, in which an application anticipates a future need for certain data and downloads it to the client before the user requests it, thereby speeding up a subsequent response; a minimal sketch follows this list. Google Maps uses this technique to load adjacent map segments before the user scrolls them into view.
* Network efficiency. The network traffic may also be significantly reduced because an application-specific client engine can be more intelligent than a standard Web browser when deciding what data needs to be exchanged with servers. This can speed up individual requests or responses because less data is being transferred for each interaction, and overall network load is reduced. However, over-use of asynchronous calls and pre-fetching techniques can neutralize or even reverse this potential benefit. Because the code cannot anticipate exactly what every user will do next, it is common for such techniques to download extra data, not all of which is actually needed, to many or all clients.
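As a sketch of the pre-fetching technique referenced in the list above (the URL and the cache variable are hypothetical):
var prefetched = {}; // client-side store of data fetched ahead of need
function prefetch(url) {
    var xhr = new XMLHttpRequest();
    xhr.onreadystatechange = function () {
        if (xhr.readyState == 4 && xhr.status == 200) {
            prefetched[url] = xhr.responseText; // ready before the user asks
        }
    };
    xhr.open("GET", url, true); // asynchronous: the interface is never blocked
    xhr.send(null);
}
prefetch("/map/tile-east"); // e.g. fetch an adjacent map segment early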
Shortcomings
Shortcomings and restrictions associated with RIAs are:
* Sandboxing. Because RIAs run within a sandbox, they have restricted access to system resources. If assumptions about access to resources are incorrect, RIAs may fail to operate correctly.
* Enabled scripting. JavaScript or another scripting language is often required. If the user has disabled active scripting in their browser, the RIA may not function properly, if at all.
* Client processing speed. To achieve platform independence, some RIAs use client-side scripts written in interpreted languages such as JavaScript, with a consequent loss of performance (a serious issue with mobile devices). This is not an issue with compiled client languages such as Java, where performance is comparable to that of traditional compiled languages, or with Flash, Curl, or Silverlight, in which the compiled code is run by a Flash, Curl or Silverlight plugin. Several web browser makers have released or are working on new JavaScript engines to accelerate execution: for example, TraceMonkey, the JavaScript engine used in Mozilla Firefox, is optimized using "trace trees" as of version 3.1 of the browser, and the Google Chrome web browser includes the V8 JavaScript engine, which also accelerates JavaScript execution. Both engines have evolved in response to the increasing usage of JavaScript and the growth of rich Internet applications.
* Script download time. Although it does not have to be installed, the additional client-side intelligence (or client engine) of RIA applications needs to be delivered by the server to the client. While much of this is usually automatically cached it needs to be transferred at least once. Depending on the size and type of delivery, script download time may be unpleasantly long. RIA developers can lessen the impact of this delay by compressing the scripts, and by staging their delivery over multiple pages of an application.
* Loss of integrity. If the application-base is X/HTML, conflicts arise between the goal of an application (which naturally wants to be in control of its presentation and behaviour) and the goals of X/HTML (which naturally wants to give away control). The DOM interface for X/HTML makes it possible to create RIAs, but by doing so makes it impossible to guarantee correct function. Because an RIA client can modify the RIA's basic structure and override presentation and behaviour, it can cause failure of the application to work properly on the client side. Eventually, this problem could be solved by new client-side mechanisms that granted an RIA client more limited permission to modify only those resources within the scope of its application. (Standard software running natively does not have this problem because by definition a program automatically possesses all rights to all its allocated resources).
* Loss of visibility to search engines. Search engines may not be able to index the text content of the application.
* Dependence on an Internet connection. While the ideal network-enabled replacement for a desktop application would allow users to be "occasionally connected", wandering in and out of hot-spots or from office to office, special platforms (e.g. Adobe AIR, Google Gears, Curl) are required to allow off-line functionality for RIA applications.
* Dependence on bandwidth. RIAs may not function as desired over low-bandwidth connections such as dial-up modems, producing a very poor user experience. This is not a problem with broadband connections.
* Accessibility. There are a lot of known Web accessibility issues in RIAs, most notably the fact that screen readers have a hard time detecting dynamic changes (caused by JavaScript) in HTML content. The WAI-ARIA suite provides a solution for this problem (via Live Regions); as well as providing a way of adding critical role, state and property information to DHTML based user interfaces.
* No deployment. In general, rich internet applications cannot be deployed the way traditional desktop applications can be deployed. There are, however, some exceptions to this (e.g. Adobe AIR and Curl).
* Security concerns. Several RIA frameworks, such as Adobe AIR, provide much more access to a user's local operating system than the older technologies they are built upon, such as Adobe Flash. This added power in the hands of developers creates the potential for RIA applications to enable traditional web vulnerabilities, such as cross-site scripting, to be used to attack desktop users and possibly spread malware.
Software development complications
The advent of RIA technologies has introduced considerable additional complexity into Web applications. Traditional Web applications built using only standard HTML, having a relatively simple software architecture and being constructed using a limited set of development options, are relatively easy to design and manage. For the person or organization using RIA technologies to deliver a Web application, their additional complexity makes them harder to design, test, measure, and support.
Use of RIA technology poses several new service level management (SLM) challenges, not all of which are completely solved today. SLM concerns are not always the focus of application developers, and are rarely if ever perceived by application users, but they are vital to the successful delivery of an online application. Aspects of the RIA architecture that complicate management processes are:
* RIA architecture breaks the Web page paradigm. Traditional Web applications can be viewed as a series of Web pages, each of which requires a distinct download, initiated by an HTTP GET request. This model has been characterized as the Web page paradigm. RIAs invalidate this model, introducing additional asynchronous server communications to support a more responsive user interface. In RIAs, the time to complete a page download may no longer correspond to something a user perceives as important, because (for example) the client engine may be prefetching some of the downloaded content for future use. New measurement techniques must be devised for RIAs, to permit reporting of response time quantities that reflect the user's experience. In the absence of standard tools that do this, RIA developers must instrument their application code to produce the measurement data needed for SLM.
* Asynchronous communication makes it harder to isolate performance problems. Paradoxically, actions taken to enhance application responsiveness also make it harder to measure, understand, report on, and manage responsiveness. Some RIAs do not issue any further HTTP GET requests from the browser after their first page, using asynchronous requests from the client engine to initiate all subsequent downloads. The RIA client engine may be programmed to continually download new content and refresh the display, or (in applications using the Comet approach) a server-side engine can keep pushing new content to the browser over a connection that never closes. In these cases, the concept of a "page download" is no longer applicable. These applications are commonly known as refreshless. These complications make it harder to measure and subdivide application response times, a fundamental requirement for problem isolation and service level management. Tools designed to measure traditional Web applications may -- depending on the details of the application and the tool -- report such applications either as a single Web page per HTTP request, or as an unrelated collection of server activities. Neither description reflects what is really happening at the application level.
* The client engine makes it harder to measure response time. For traditional Web applications, measurement software can reside either on the client machine or on a machine that is close to the server, provided that it can observe the flow of network traffic at the TCP and HTTP levels. Because these protocols are synchronous and predictable, a packet sniffer can read and interpret packet-level data, and infer the user’s experience of response time by tracking HTTP messages and the times of underlying TCP packets and acknowledgments. But the RIA architecture reduces the power of the packet sniffing approach, because the client engine breaks the communication between user and server into two separate cycles operating asynchronously -- a foreground (user-to-engine) cycle, and a background (engine-to-server) cycle. Both cycles are important, because neither stands alone; it is their relationship that defines application behavior. But that relationship depends only on the application design, which cannot (in general) be inferred by a measurement tool, especially one that can observe only one of the two cycles. Therefore the most complete RIA measurements can only be obtained using tools that reside on the client and observe both cycles.
Current status of development
RIAs are still in the early stages of development and user adoption.[citation needed] There are a number of restrictions and requirements that remain:
* Browser adoption: Many RIAs require modern web browsers in order to run. Some RIA platforms depend on advanced JavaScript engines in the browser if they use techniques such as XMLHttpRequest for client-server communication, and DOM Scripting and advanced CSS techniques to enable the rich user interface. Other RIA platforms require the inconvenience of a one-time installation of a plugin, but then have an advantage of running consistently across a wider range of browsers.
* Web standards: Differences between web browsers can make it difficult to write an RIA that will run across all major browsers. The consistency of the Java platform or other platforms (e.g. Adobe AIR, Curl, Silverlight) using a plugin and installed runtime environment makes this task much simpler.
* Development tools: Some Ajax Frameworks and products such as Curl, Adobe Flex and Microsoft Silverlight provide an integrated environment in which to build RIAs.
* Accessibility concerns: Additional interactivity may require technical approaches that limit applications' accessibility.
* User adoption: Users expecting standard web applications may find that some accepted browser functionality (such as the "Back" button) may have somewhat different or even undesired behavior.
RIA Related Topics
RIA with real-time push
Traditionally, web pages have been delivered to the client only when the client requested them. For every client request, the browser initiates an HTTP connection to the web server, which returns the data and closes the connection. The drawback of this approach is that the displayed page updates only when the user explicitly refreshes it or moves to a new page. Since transferring entire pages can take a long time, refreshing pages can introduce long latency.
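One common way to approximate real-time push with standard browser facilities is long polling, in which the client keeps a request pending so the server can reply the moment new content exists. A minimal sketch (the "/updates" endpoint and the "feed" element are hypothetical):
function poll() {
    var xhr = new XMLHttpRequest();
    xhr.onreadystatechange = function () {
        if (xhr.readyState == 4) {
            if (xhr.status == 200) {
                // Display whatever the server "pushed" in its reply
                document.getElementById("feed").innerHTML = xhr.responseText;
            }
            poll(); // immediately reopen, keeping a request always pending
        }
    };
    xhr.open("GET", "/updates", true);
    xhr.send(null);
}
poll();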
Demand for localised usage of RIA
With the increasing adoption and improvement of broadband technologies, fewer users experience poor performance caused by remote latency. Furthermore, one of the critical reasons for using an RIA is that many developers are looking for a language to serve up desktop applications that is not only desktop-OS neutral but also free of installation and system issues.
An RIA running in the ubiquitous web browser is a potential candidate even when used standalone or over a LAN, with the required web-server functionality hosted locally.
Client-side functionalities and development tools for RIA needed
With client-side functionalities like JavaScript and DHTML, RIAs can operate on top of a range of OS and web-server functionalities.
User interface languages
Instead of HTML/XHTML, new user-interface markup languages can be used in RIAs. For instance, the Mozilla Foundation's XML-based user-interface markup language XUL could be used, though RIAs built with it would be restricted to Mozilla-based browsers, since XUL is not a de facto or de jure standard. The W3C's Rich Web Clients Activity[4] has initiated a Web Application Formats Working Group whose mission includes the development of such standards.[5] The original DARPA project at MIT which resulted in the W3C also resulted in the web content/programming language Curl, now at version 6.0.
RIA user interfaces can also become richer through the use of scriptable Scalable Vector Graphics (though not all browsers can render SVG natively yet) as well as Synchronized Multimedia Integration Language (SMIL).
Other techniques
RIAs could use XForms to enhance their functionality.
XML and XSLT, along with some XHTML, CSS and JavaScript, can also be used to generate richer client-side UI components such as data tables that can be re-sorted locally on the client without going back to the server. Mozilla and Internet Explorer browsers both support this.
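A sketch of such a client-side transformation, assuming xmlDoc and xslDoc are XML documents already obtained (for example via responseXML); the Mozilla and Internet Explorer code paths differ:
function transform(xmlDoc, xslDoc) {
    if (window.XSLTProcessor) {
        // Mozilla: W3C-style XSLT processor
        var processor = new XSLTProcessor();
        processor.importStylesheet(xslDoc);
        return processor.transformToFragment(xmlDoc, document);
    } else if (xmlDoc.transformNode) {
        // Internet Explorer (MSXML): returns the result as an HTML string
        return xmlDoc.transformNode(xslDoc);
    }
}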
The Omnis Web Client is an ActiveX control or Netscape plug-in which can be embedded into an HTML page providing a rich application interface in the end-user's web browser.
An Ajax framework is a framework that helps to develop web applications that use Ajax, a collection of technologies used to build dynamic web pages on the client side. Data is read from the server or sent to the server by JavaScript requests. However, some processing at the server side may be required to handle requests, such as finding and storing the data. This is accomplished more easily with the use of a framework dedicated to process Ajax requests. The goal of the framework is to provide the Ajax engine described below and associated server and client-side functions.
Benefit of a framework
In the article that coined the "Ajax" term, J.J. Garrett describes the technology as "an intermediary...between the user and the server."[1] This Ajax engine is intended to suppress the delays perceived by the user when a page attempts to access the server. A framework eases the work of the Ajax programmer at two levels: on the client side, it offers JavaScript functions to send requests to the server. On the server side, it processes the requests, searches for the data, and transmits them to the browser. Some frameworks are very elaborate and provide a complete library to build web applications.
Types of frameworks
Ajax frameworks can be loosely grouped into categories according to the features they offer and the skills required of the user:
Direct Ajax frameworks
These frameworks require HTML, CSS and Ajax expertise: a developer is expected to author pages directly in HTML, and framework APIs deal directly with HTML elements. Cross-browser APIs are provided for a variety of purposes, commonly including communications, DOM manipulation, event handling, and sizing/moving/animating HTML elements.
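For example, the cross-browser event-handling wrapper below is representative of what a direct Ajax framework provides (the helper name is hypothetical); it hides the difference between the W3C and older Internet Explorer event models:

    // Hypothetical sketch of a cross-browser event helper.
    function addEvent(element, type, handler) {
        if (element.addEventListener) {
            element.addEventListener(type, handler, false); // W3C model
        } else if (element.attachEvent) {
            element.attachEvent("on" + type, handler);      // older IE model
        }
    }

    // Usage: addEvent(document.getElementById("save"), "click", onSave);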
These frameworks are generally smaller. They are commonly used to enrich a web site, such as a shopping experience, rather than to build a full web application such as web-based email, at least not without further frameworks layered on top.
Ajax component frameworks
These frameworks offer pre-built components, such as tabbed panes, which automatically create and manage their own HTML. Components are generally created via JavaScript or XML tags, or by adding special attributes to normal HTML elements. These frameworks are generally larger, and intended for web applications rather than web sites.
Some component frameworks require the developer to have extensive HTML/CSS/Ajax experience and to do cross-browser testing. For example, grids, tabs, and buttons may be provided, but user input forms are expected to be authored directly in HTML/CSS and manipulated via Ajax techniques. Other frameworks provide a complete component suite such that only general XML and/or JavaScript abilities are required.
Ajax component frameworks can enable more rapid development than direct Ajax frameworks, but with less control, so it is important that an Ajax component framework provide the following (a hypothetical usage sketch follows this list):
* customization APIs, e.g., an event that fires when the user stops editing within a grid
* skinning facilities, where appearance can be changed without affecting behavior or layout
* programmatic control, e.g., dynamically adding tabs or dynamically creating components based on user data
* extensibility—creation of new components based on other components, so that the benefits of a component-based framework are not lost
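Purely as an illustration of these facilities, usage of a hypothetical component framework might look like this (every name here, including Grid, TabSet, setSkin, and onEditComplete, is invented for the sketch rather than taken from any real library):

    // Hypothetical component-framework usage; no real library is implied.
    var grid = new Grid("orders");             // pre-built grid component

    // Customization API: react when the user stops editing a cell.
    grid.onEditComplete = function (row, column, value) {
        saveToServer(row, column, value);      // hypothetical helper
    };

    // Skinning: change appearance without touching behavior or layout.
    grid.setSkin("dark");

    // Programmatic control: create components from user data at runtime.
    var reports = [{ title: "Sales", url: "sales.html" },
                   { title: "Stock", url: "stock.html" }];
    var tabs = new TabSet("workspace");
    for (var i = 0; i < reports.length; i++) {
        tabs.addTab(reports[i].title, reports[i].url);
    }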
Server-driven Ajax frameworks
Several frameworks offer a server-side component-based development model with some degree of Ajax support.
Components are created and manipulated on the server using a server-side programming language. Pages are then rendered by a combination of server-side and client-side HTML generation and manipulation. User actions are communicated to the server via Ajax techniques, server-side code manipulates a server-side component model, and changes to the server component model are reflected on the client automatically.
These frameworks offer familiarity for server-side developers at the expense of some degree of power and performance. Ajax frameworks that handle presentation completely within the browser offer greater responsiveness because they handle many more user interactions without server involvement. In a server-driven model, some UI interactions can become chatty; for example, an input field that is dynamically enabled or disabled based on server-side code may cause many network requests. Furthermore, server-dependent Ajax frameworks cannot offer offline support. Still, this approach is popular, especially in situations where the benefits of a full Ajax architecture cannot be captured anyway.
Extending such a framework may require the developer to understand which parts of the presentation are handled on the client vs on the server, and to write a mixture of Ajax and server-side code.
Ajax (asynchronous JavaScript and XML), or AJAX, is a group of interrelated web development techniques used for creating interactive web applications or rich Internet applications. With Ajax, web applications can retrieve data from the server asynchronously, in the background, without interfering with the display and behavior of the existing page.[1] Data is usually retrieved using the XMLHttpRequest object, or through Remote Scripting in browsers that do not support it. Despite the name, neither JavaScript nor XML is actually required, nor do the requests need to be asynchronous.
History
While the term Ajax was coined in 2005, techniques for the asynchronous loading of content date back to 1996, when Internet Explorer introduced the IFrame element. Microsoft's Remote Scripting, introduced in 1998, acted as a more elegant replacement for these techniques, with data being pulled in by a Java applet with which the client side could communicate using JavaScript. In 1999, Microsoft created the XMLHttpRequest object as an ActiveX control in Internet Explorer 5, and the developers of Mozilla and Safari soon followed with native versions of the object. On April 5, 2006, the World Wide Web Consortium (W3C) released the first draft specification for the object in an attempt to create an official web standard.
Technologies
The term Ajax has come to represent a broad group of web technologies that can be used to implement a web application that communicates with a server in the background, without interfering with the current state of the page. In the article that coined the term Ajax,[3] Jesse James Garrett explained that it refers specifically to these technologies:
* XHTML and CSS for presentation
* the Document Object Model for dynamic display of and interaction with data
* XML and XSLT for the interchange and manipulation of data, respectively
* the XMLHttpRequest object for asynchronous communication
* JavaScript to bring these technologies together
Since then, however, there have been a number of developments in the technologies used in an Ajax application, and the definition of the term Ajax. In particular, it has been noted that:
* JavaScript is not the only client-side scripting language that can be used for implementing an Ajax application. Other languages such as VBScript are also capable of the required functionality.
* XML is not required for data interchange and therefore XSLT is not required for the manipulation of data. JavaScript Object Notation (JSON) is often used as an alternative format for data interchange, although other formats such as preformatted HTML or plain text can also be used.
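As a brief illustration of the JSON alternative, a server response can be turned into a JavaScript object on the client with no XML or XSLT involved (the field names are hypothetical):

    // Hypothetical JSON text, as it might arrive in a responseText property:
    var text = '{"name": "widget", "price": 9.99}';

    // Newer browsers parse it natively; older ones used a library such as
    // json2.js, or (unsafely) eval("(" + text + ")").
    var item = JSON.parse(text);

    alert(item.name + " costs " + item.price);   // "widget costs 9.99"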
Advantages
* In many cases, the pages on a web site share a large amount of common content. Using traditional methods, that content would have to be reloaded on every request. Using Ajax, however, a web application can request only the content that needs to be updated, drastically reducing bandwidth usage and load time.
* The use of asynchronous requests allows the client's Web browser UI to be more interactive and to respond quickly to inputs, and sections of pages can also be reloaded individually. Users may perceive the application to be faster or more responsive, even if the application has not changed on the server side.
* The use of Ajax can reduce connections to the server, since scripts and style sheets only have to be requested once.
Disadvantages
* Dynamically created pages do not register themselves with the browser's history engine, so clicking the browser's "back" button would not return the user to an earlier state of the Ajax-enabled page, but would instead return them to the last full page visited before it. Workarounds include using invisible IFrames to trigger changes in the browser's history, and changing the anchor portion of the URL (following a #) when Ajax runs and monitoring it for changes (see the sketch after this list).
* Dynamic web page updates also make it difficult for a user to bookmark a particular state of the application. Solutions to this problem exist, many of which use the URL fragment identifier (the portion of a URL after the '#') to keep track of, and allow users to return to, the application in a given state.
* Because most web crawlers do not execute JavaScript code, web applications should provide an alternative means of accessing the content that would normally be retrieved with Ajax, to allow search engines to index it.
* Any user whose browser does not support Ajax or JavaScript, or simply has JavaScript disabled, will not be able to use its functionality.[12] Similarly, devices such as mobile phones, PDAs, and screen readers may not have support for JavaScript or the XMLHttpRequest object. Also, screen readers that are able to use Ajax may still not be able to properly read the dynamically generated content.
* The same origin policy prevents Ajax from being used across domains, although the W3C has a draft that would enable this functionality.
* The lack of a standards body behind Ajax means there is no widely adopted best practice to test Ajax applications. Testing tools for Ajax often do not understand Ajax event models, data models, and protocols.
* Ajax opens up another attack vector for malicious code, one that web developers might not fully test for.
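The fragment-identifier workaround mentioned above can be sketched as follows (the state names and element ID are hypothetical); the page records each application state in the URL hash and polls for changes so that the back button restores earlier states (newer browsers expose an onhashchange event that removes the need for polling):

    // Hypothetical sketch: track Ajax state in the URL fragment so that
    // the back button and bookmarks work.
    var lastHash = window.location.hash;

    function showState(state) {
        // Hypothetical: re-run whatever Ajax request produced this state.
        document.getElementById("view").innerHTML = "Loading " + state + "...";
    }

    // When the application changes state, record it in the fragment;
    // assigning to location.hash adds a browser history entry.
    function goToState(state) {
        window.location.hash = state;
        showState(state);
    }

    // Poll for back/forward navigation in browsers without onhashchange.
    setInterval(function () {
        if (window.location.hash !== lastHash) {
            lastHash = window.location.hash;
            showState(lastHash.replace("#", ""));
        }
    }, 200);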
Since the mid-1990s, web development has been one of the fastest-growing industries in the world. In 1995 there were fewer than 1,000 web development companies in the United States alone, but by 2005 there were over 30,000 such companies.[1] The web development industry is expected to grow over 20% by 2010. The growth of this industry is being pushed by large businesses wishing to sell products and services to their customers and to automate business workflow, as well as by the growth of many small web design and development companies.
In addition, the cost of web site development and hosting has dropped dramatically during this time. Instead of costing tens of thousands of dollars, as was the case for early web sites, one can now develop a simple web site for less than a thousand dollars, depending on its complexity and amount of content. Smaller web site development companies are now able to make web design accessible to both smaller companies and individuals, further fueling the growth of the industry. As far as development tools and platforms are concerned, many systems are available to the public free of charge. A popular example is LAMP (Linux, Apache, MySQL, PHP), which is usually distributed free of charge; this alone has led people around the globe to set up new web sites daily, contributing to the rising popularity of web development. Another contributing factor has been the rise of easy-to-use WYSIWYG web development software, most prominently Adobe Dreamweaver and Microsoft Expression Studio (formerly Microsoft FrontPage). Using such software, virtually anyone can develop a web page in a matter of minutes. Knowledge of HyperText Markup Language (HTML) or other programming languages is not required, but is recommended for professional results.
The next generation of web development tools builds on the strong growth of LAMP and Microsoft .NET technologies to use the Web as a platform for running applications online. Web developers now help to deliver applications as web services that were traditionally available only as applications on a desktop computer.
Instead of running executable code on a local computer, users are interacting with online applications to create new content. This has created new methods in communication and allowed for many opportunities to decentralize information and media distribution. Users are now able to interact with applications from many locations, instead of being tied to a specific workstation for their application environment.
Examples of the dramatic transformation in communication and commerce led by web development include e-commerce. Online auction sites such as eBay have changed the way consumers find and purchase goods and services. Online resellers such as Amazon.com and Buy.com (among many others) have transformed the shopping and bargain-hunting experience for many consumers. Another good example of transformative communication led by web development is the blog. Web applications such as WordPress and b2evolution have created easily implemented blog environments for individual web sites. Open-source content management systems such as Typo3, Xoops, Joomla!, and Drupal have extended web development into new modes of interaction and communication.
When Netscape Navigator 4 dominated the browser market, the popular solution for laying out a web page was tables. Often even simple page designs required dozens of tables nested inside each other. Many web templates in Dreamweaver and other WYSIWYG editors still use this technique today. Navigator 4 didn't support CSS to a useful degree, so it simply wasn't used.
After the browser wars subsided and the dominant browsers such as Internet Explorer became more W3C-compliant, designers started turning toward CSS as an alternative means of laying out their pages. CSS proponents say that tables should be used only for tabular data, not for layout. Using CSS instead of tables also returns HTML to semantic markup, which helps bots and search engines understand what is going on in a web page. All modern web browsers support CSS, with differing degrees of limitations.
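As a small illustration of the CSS approach, a two-column layout that once called for a nested table can be expressed with floats (the IDs and proportions here are arbitrary):

    <!-- Hypothetical sketch: a two-column layout without tables. -->
    <style type="text/css">
        #sidebar { float: left; width: 25%; }
        #content { margin-left: 27%; }   /* leaves room for the sidebar */
        #footer  { clear: both; }        /* drops below both columns */
    </style>
    <div id="sidebar">Navigation links here</div>
    <div id="content">Main page content here</div>
    <div id="footer">Footer text here</div>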
However, one of the main points against CSS is that by relying on it exclusively, control is essentially relinquished, as each browser has its own quirks that result in a slightly different page display. This is especially a problem because not every browser supports the same subset of CSS rules. For designers who are used to table-based layouts, developing web sites in CSS often becomes a matter of trying to replicate what can be done with tables, leading some to find CSS design rather cumbersome due to lack of familiarity. For example, at one time it was rather difficult to produce certain design elements, such as vertical positioning and full-length footers, in a design using absolute positioning. With the abundance of CSS resources available online today, though, designing with reasonable adherence to standards involves little more than applying CSS 2.1 or CSS 3 to properly structured markup.
These days most modern browsers have resolved most of these quirks in CSS rendering, and this has made many different CSS layouts possible. However, some people continue to use old browsers, and designers need to keep this in mind and allow pages to degrade gracefully in them. Most notable among these old browsers are Internet Explorer 5 and 5.5, which, according to some web designers, are becoming the new Netscape Navigator 4: a block that holds the World Wide Web back from converting to CSS design. In any case, the World Wide Web Consortium has made CSS in combination with XHTML the standard for web design.
A web site is a collection of information about a particular topic or subject. Designing a web site means arranging and creating the web pages that in turn make up the site. A web page carries the information for which the web site is developed. A web site might be compared to a book, where each page of the book is a web page.
There are many aspects (design concerns) in this process, and due to the rapid development of the Internet, new aspects may emerge. For non-commercial web sites, the goals may vary depending on the desired exposure and response. For typical commercial web sites, the basic aspects of design are:
* The content: the substance and information on the site should be relevant to the site and should target the area of the public that the web site is concerned with.
* The usability: the site should be user-friendly, with the interface and navigation simple and reliable.
* The appearance: the graphics and text should include a single style that flows throughout, to show consistency. The style should be professional, appealing and relevant.
* The visibility: the site must also be easy to find via most, if not all, major search engines and advertisement media.
A web site typically consists of text and images. The first page of a web site is known as the home page or index. Some web sites use what is commonly called a splash page. Splash pages might include a welcome message, language or region selection, or a disclaimer. Each web page within a web site is an HTML file with its own URL. After the web pages are created, they are typically linked together using a navigation menu composed of hyperlinks (see the sketch below). Faster browsing speeds have led to shorter attention spans and more demanding online visitors, and this has resulted in less use of splash pages, particularly on commercial web sites.
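A navigation menu of this kind is just a list of hyperlinks; a minimal version might look like this (the file names are hypothetical):

    <!-- Hypothetical sketch: a simple navigation menu linking a site's pages. -->
    <ul id="nav">
        <li><a href="index.html">Home</a></li>
        <li><a href="about.html">About</a></li>
        <li><a href="contact.html">Contact</a></li>
    </ul>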
Once a web site is completed, it must be published, or uploaded, in order to be viewable to the public over the Internet. This may be done using an FTP client. Once published, the webmaster may use a variety of techniques to increase the traffic, or hits, that the web site receives. These may include submitting the web site to a search engine such as Google or Yahoo, exchanging links with other web sites, creating affiliations with similar web sites, and so on.