Search Results

Search found 56287 results on 2252 pages for 'build your web'.

Page 110 of 2252

  • php html flash website design tools for beginner

    - by LikeToCode
    I have 5 years of C#, Perl and SQL programming experience, but I've just found a web designer and developer job. I don't know anything about that side of things, yet I need to design a website using PHP, Flash and HTML. Can you give me pointers on where to start learning it all ASAP so I can start designing the website? I downloaded WAMP and learned to configure it. Other than that, I don't know what to do next. They gave me a few pictures to incorporate, but I don't know how :)

    Read the article

  • registering httpModules in web.config

    - by Praesagus
    I am trying to register a custom HttpModule in the web.config file. MSDN's example shows an entry that is commented out... which doesn't work so well. When I uncomment it, I receive the error "Could not load file or assembly 'HttpModule.cs' or one of its dependencies. The system cannot find the file specified." The file name is HttpModule.cs and the class name is MyHttpModule. There is no explicit namespace yet.

        <httpModules>
          <add name="MyHttpModule" type="MyHttpModule, HttpModule" />
        </httpModules>

    I am sure I am overlooking the obvious. Do I need to specify the file path somewhere? Thanks in advance.
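
    For reference, the type attribute expects "Namespace.ClassName, AssemblyName", where AssemblyName is the compiled DLL, not the .cs source file. Below is a minimal module sketch; the class name matches the post, but the pipeline event and header are assumptions for illustration only.

        using System;
        using System.Web;

        // Minimal IHttpModule sketch. Register it in web.config with a type of
        // "MyHttpModule, <YourAssemblyName>" (the compiled assembly, not HttpModule.cs).
        public class MyHttpModule : IHttpModule
        {
            public void Init(HttpApplication context)
            {
                // Hook an event in the request pipeline; BeginRequest is just an example.
                context.BeginRequest += delegate(object sender, EventArgs e)
                {
                    var app = (HttpApplication)sender;
                    app.Context.Response.AppendHeader("X-My-Module", "hello");
                };
            }

            public void Dispose()
            {
                // Nothing to clean up in this sketch.
            }
        }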

    Read the article

  • How to set up asp.net membership with a web application instead of a web project

    - by mwright
    Originally the site was set up as a Website project, which ended up not working for various reasons. I'm trying to make it work as a web application project and have started from the ground up with a new project. I have looked online and not found a good resource that explains some of the "simple" things that are taken for granted in a website project. Specifically: How do I specify the external SQL database that the membership site should use? Is it possible to set privileges on a folder and require authentication when accessing that content, or does each page need to check for itself? Once again, I'm looking for resources I can use as I move forward rather than answers to specific questions (although those are welcome as well).
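
    Folder-level protection normally lives in configuration (an authorization rule scoped to that folder), so pages do not have to check for themselves. If a per-page check is ever needed anyway, a minimal sketch might look like this; the page class name is hypothetical and forms authentication is assumed.

        using System;
        using System.Web.Security;
        using System.Web.UI;

        // Fallback sketch: deny anonymous visitors on a single page by
        // redirecting them to the configured login page.
        public class MembersOnly : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                if (!User.Identity.IsAuthenticated)
                {
                    // Sends the visitor to the forms-authentication login URL.
                    FormsAuthentication.RedirectToLoginPage();
                }
            }
        }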

    Read the article

  • Why isn't Google Web Toolkit more popular?

    - by gerdemb
    I've recently become intrigued with Google Web Toolkit and have started playing with it on some personal projects. I've noticed, though, that it doesn't seem to be very popular. For example, two major freelancing job boards (www.elance.com and www.odesk.com) list no jobs for GWT, and the list of projects using it on Google's official site (http://code.google.com/webtoolkit/app_gallery.html) is pretty slim compared to, say, Django projects (http://www.djangosites.org/). This seems odd to me, as GWT has been around since 2006 and is backed by the Google brand name. It also neatly solves the problem of creating cross-browser, completely dynamic websites in a way I haven't seen possible with any other tool. So, why the lack of acceptance?

    Read the article

  • Web Page Rendering Capture

    - by Chaitanya
    I'll start by describing the problem itself; rather than a problem, I'm really looking for a better solution. I have an ASP.NET page which has a bunch of images, each with a link underneath it. Each image is in fact the latest rendering of the page behind the link underneath it. I scheduled a .bat script which runs every hour to fetch the images through IECapt, a web page rendering capture utility. One thing that annoys me about this utility is that it takes a lot of time for the 20 images I have, and for a few of them, because of the Flash content, it misses the actual screenshot of the website. Now I'd like to know whether this rendering can be done with traditional programming; I'm not interested in using any utilities, and I'm interested in trying this myself. The solution need not necessarily be C# based; I'm ready to try any other language, because it gives me a chance to learn. Thank you.
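
    For what it's worth, one way to attempt this in managed code is the WinForms WebBrowser control plus Control.DrawToBitmap. This is only a sketch under assumptions (fixed page size, placeholder URL), and like IECapt it may still miss plugin/Flash content:

        using System;
        using System.Drawing;
        using System.Drawing.Imaging;
        using System.Windows.Forms;

        // Sketch: load a page in an off-screen WebBrowser control and save a bitmap.
        class PageCapture
        {
            [STAThread]
            static void Main()
            {
                var browser = new WebBrowser
                {
                    ScrollBarsEnabled = false,
                    Width = 1024,
                    Height = 768
                };

                bool loaded = false;
                browser.DocumentCompleted += (s, e) => loaded = true;
                browser.Navigate("http://example.com/");

                // Pump Windows messages until the document has finished loading.
                while (!loaded)
                {
                    Application.DoEvents();
                }

                using (var bitmap = new Bitmap(browser.Width, browser.Height))
                {
                    browser.DrawToBitmap(bitmap, new Rectangle(0, 0, browser.Width, browser.Height));
                    bitmap.Save("capture.png", ImageFormat.Png);
                }
            }
        }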

    Read the article

  • ISA Web Farm and WCF service hosted in a Windows Service with basicHttpBinding

    - by Ryan Pedersen
    I have created a WCF service that needs to be hosted in a Windows Service because it participates in a P2P mesh (NetPeerTcpBinding). When I tried to host the WCF service with NetPeerTcpBinding endpoints in IIS, the service wouldn't run, because it turns out the P2P binding doesn't work in IIS. I have exposed an HTTP endpoint from the WCF service hosted in the Windows Service, and I want to know if there is a way to create an ISA web farm that will route traffic to the HTTP endpoints on two machines, each running the same WCF service in a Windows Service container.
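
    For context, the basicHttpBinding endpoint that each farm node exposes from its Windows Service host might look roughly like the sketch below; the contract, class names, port, and address are assumptions, and the NetPeerTcpBinding mesh endpoint is omitted.

        using System;
        using System.ServiceModel;
        using System.ServiceProcess;

        // Sketch of a self-hosted HTTP endpoint a load balancer could route to.
        [ServiceContract]
        public interface IMeshService
        {
            [OperationContract]
            string Ping();
        }

        public class MeshService : IMeshService
        {
            // Returning the machine name makes it easy to see which farm node answered.
            public string Ping() { return Environment.MachineName; }
        }

        public class MeshHostService : ServiceBase
        {
            private ServiceHost _host;

            protected override void OnStart(string[] args)
            {
                _host = new ServiceHost(typeof(MeshService),
                    new Uri("http://localhost:8731/MeshService"));
                _host.AddServiceEndpoint(typeof(IMeshService), new BasicHttpBinding(), "");
                _host.Open();
            }

            protected override void OnStop()
            {
                if (_host != null)
                {
                    _host.Close();
                }
            }
        }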

    Read the article

  • [RESTful] Calculation with RESTful web service with MySQL database : )

    - by Dobby
    Dears, I am now building a RESTful web service with a MySQL database. I used NetBeans to create the resources of the RESTful service with MySQL, and I can now use GET and POST/PUT to list and add/modify data entities in the MySQL server. Currently, I wish to do some calculations right after a client POSTs data, so that the posted data, together with the calculated results, is inserted into the MySQL database. I am very new to this; I guess I need to add some functions and call them to do the calculation, but I don't know where and how to do that : ( Could anyone help me on this issue? Thanks a lot in advance! : ) Best! Dobby

    Read the article

  • JAX-WS Consuming web service with WS-Security and WS-Addressing

    - by aurealus
    I'm trying to develop a standalone Java web service client with JAX-WS (Metro) that uses WS-Security with Username Token Authentication (password digest, nonces, and timestamp) and timestamp verification, along with WS-Addressing, over SSL. The WSDL I have to work with does not define any security policy information. I have been unable to figure out exactly how to add this header information (the correct way to do so) when the WSDL does not contain it. Most examples I have found using Metro revolve around using NetBeans to automatically generate this from the WSDL, which does not help me at all. I have looked into WSIT, XWSS, etc. without much clarity or direction. JBoss WS Metro looked promising, but not much luck there yet either. Does anyone have experience doing this, or suggestions on how to accomplish this task? Even pointing me in the right direction would be helpful. I am not restricted to a specific technology, other than that it must be Java based.

    Read the article

  • Ninject : ninject.web - How to apply on a regular ASP.Net Web (!MVC)

    - by No Body
    What I am looking for is something similar to the below (http://github.com/ninject/ninject.web.mvc): README.markdown This extension allows integration between the Ninject core and ASP.NET MVC projects. To use it, just make your HttpApplication (typically in Global.asax.cs) extend NinjectHttpApplication:

        public class YourWebApplication : NinjectHttpApplication
        {
            public override void OnApplicationStarted()
            {
                // This is only needed in MVC1
                RegisterAllControllersIn("Some.Assembly.Name");
            }

            public override IKernel CreateKernel()
            {
                return new StandardKernel(new SomeModule(), new SomeOtherModule(), ...);

                // OR, to automatically load modules:
                var kernel = new StandardKernel();
                kernel.AutoLoadModules("~/bin");
                return kernel;
            }
        }

    Once you do this, your controllers will be activated via Ninject, meaning you can expose dependencies on their constructors (or properties, or methods) to request injections.

    Read the article

  • Customize Your WordPress Blog & Build an Audience

    - by Matthew Guay
    Want to quickly give your blog a fresh coat of paint and make it stand out from the pack? Here's how you can customize your WordPress blog and make it uniquely yours.

    WordPress offers many features that help you make your blog the best it can be. Although it doesn't offer as many customization features as full WordPress running on your own server, it still makes it easy to make your free blog as professional or cute as you like. Here we'll look at how you can customize features in your blog and build an audience.

    Personalize Your Blog

    WordPress makes it easy to personalize your blog. Most of the personalization options are available under the Appearance menu on the left. Here we'll look at how you can use most of these.

    Add New Theme

    WordPress is popular for the wide range of themes available for it. While you cannot upload your own theme to your blog, you can choose from over 90 free themes currently available, with more added all the time. To change your theme, select the Themes page under Appearance.

    The Themes page will show random themes, but you can choose to view them in alphabetical order, by popularity, or by how recently they were added. Or, you can search for a theme by name or features. One neat way to find a theme that suits your needs is the Feature Filter. Click the link on the right of the search button, and then select the options you want to make sure your theme has. Click Apply Filters and WordPress will narrow your choices to themes that contain these features.

    Once you find a theme you like, click Preview under its name to see how your blog will look. This will open a popup that shows your blog with the new theme. Click the Activate link in the top right corner of the popup if you want to keep this theme; otherwise, click the x in the top left corner to close the preview and continue your search.

    Edit Current Theme

    Many of the themes on WordPress have customization options so you can make your blog stand out from others using the same theme. The default theme, Twenty Ten, lets you customize both the header and background image, and many themes have similar options.

    To choose a new header image, select the Header page under Appearance. Select one of the pre-installed images and click Save Changes, or upload your own image. If you upload an image larger than the size for the header, WordPress will let you crop it directly in the web interface. Click Crop Header when you've selected the portion you want for the header of your blog.

    You can also customize your blog's background from the Background page under Appearance. You can upload an image for the background, or you can enter a hex value of a color for a solid background. If you'd rather visually choose a color, click Select a Color to open a color wheel that makes it easy to choose a nice color. Click Save Changes when you're done.

    Note that not all themes contain these customization options, but many are flexible. You cannot edit the actual CSS of your theme on free WordPress blogs, but you can purchase the Custom CSS Upgrade for $14.97/year to add this ability.

    Add Widgets With Extra Content

    Widgets are small addons for your blog, similar to Desktop Gadgets in Windows 7 or Dashboard widgets in Mac OS X. You can add widgets to your blog to show recent Tweets, favorite Flickr pictures, popular articles, and more. To add widgets to your blog, open the Widgets page under Appearance.

    You'll see a variety of widgets available in the main white box. Select one you want to add, and drag it to the widget area of your choice. Different themes may offer different areas to place widgets, such as the sidebar or footer. Most of the widgets offer configuration options. Click the down arrow beside a widget's name to edit it. Set it up as you wish, and click Save at the bottom of the widget. Now we've got some nice dynamic content on our blog that's automatically updated from the net.

    Choose Blog Extras

    By default, WordPress shows previews of websites when visitors hover over links on your blog, uses a special mobile theme when people visit from a mobile device, and shows related links to other blogs on the WordPress network at the end of your posts. If you don't like these features, you can disable them on the Extras page under Appearance.

    Build Your Audience

    Now that your blog is looking nice, we can make sure others will discover it. WordPress makes it easy for you to make your site discoverable on search engines or social networks, and even gives you the option to keep your site private if you'd prefer. Open the Privacy page under Tools to change your site's visibility. By default, it will be indexed by search engines and be viewable to everyone. You can also choose to leave your blog public but block search engines, or you can make it fully private.

    If you choose to make your blog private, you can enter up to 35 usernames of people you want to be able to see it. Each private visitor must have a WordPress.com account so they can log in. If you need more than 35 private members, you can upgrade to allow unlimited private members for $29.97/year.

    Then, if you do want your site visible to search engines, one of the best ways to make sure your content is discovered is to register with their webmaster tools. Once registered, you need to add your key to your site so the search engine will find and index it. At the bottom of the Tools page, WordPress lets you enter your key from Google, Bing, and Yahoo! to make sure your site is discovered. If you haven't signed up with these tools yet, you can sign up via the links on this page as well.

    Post Blog Updates to Social Networks

    Many people discover the sites they visit from friends and others via social networks. WordPress makes it easy to automatically share links to your content on popular social networks. To activate this feature, open the My Blogs page under Dashboard. Now, select the services you want to activate under the Publicize section. This will automatically update Yahoo!, Twitter, and/or Facebook every time you publish a new post. You'll have to authorize your connection with the social network. With Twitter and Yahoo!, you can authorize them with only two clicks, but integrating with Facebook will take several steps.

    If you'd rather share links yourself on social networks, you can get shortened URLs to your posts. When you write a new post or edit an existing one, click the Get Shortlink button located underneath the post's title. This will give you a small URL, usually 20 characters or less, that you can use to post on social networks such as Twitter.

    This should help build your traffic, and if you want to see how many people are checking out your site, check out the stats on your Dashboard. This shows a graph of how many people are visiting, and popular posts. Click View All if you'd like more detailed stats, including search engine terms that lead people to your blog.
    Conclusion

    Whether you're looking to make a private blog for your group or publish a blog that's read by millions around the world, WordPress is a great way to do it for free. And with all of the personalization options, you can make it memorable and exciting for your visitors. If you don't have a blog, you can always sign up for a free one from WordPress.com. Also make sure to check out our article on how to Start Your Own Blog with WordPress.

    Read the article

  • Neon toolkit and Gate Web Service

    - by blueomega
    I am trying to run any of the services from the GATE web service in NeOn 2.3. Even ANNIE, which runs so well in GATE, doesn't run; or rather, it stays processing for an indefinite time, for something that should take no more than a couple of seconds. I run the wizard, set the input directory, leave the file pattern as default, and set a folder and name for the output ontology. Shouldn't that be enough? Shouldn't I get something, even an error? I think it's the location that's giving me problems:

        http://safekeeper1.dcs.shef.ac.uk/neon/services/sardine
        http://safekeeper1.dcs.shef.ac.uk/neon/services/sprat
        http://safekeeper1.dcs.shef.ac.uk/neon/services/annie
        http://safekeeper1.dcs.shef.ac.uk/neon/services/termraider

    How can I confirm it? Can I run it offline? Can anyone give me a hand? Also, I've seen SPRAT running on GATE, in "SPRAT: a tool for automatic semantic pattern-based ontology population". Can anyone teach me how, and with what versions? Thx, Celso Costa

    Read the article

  • Best practices for defining and initializing variables in web.xml and then accessing them from Java

    - by DutrowLLC
    I would like to define and initialize some variables in web.xml and then access the values of these variables inside my Java application. The reason I want to do this is that I would like to be able to change the values of these variables without having to recompile the code. What is the best practice for doing this? Most of the variables are just strings, maybe some numbers as well. Does the class that accesses the variables have to be a servlet? Thanks! Chris

    Read the article

  • Web framework for an application utilizing existing database?

    - by tputkonen
    A legacy web application written in PHP and using a MySQL database needs to be rewritten completely. However, the existing database structure must not be changed at all. I'm looking for suggestions on which framework would be most suitable for this task. Language candidates are Python, PHP, Ruby and Java. According to many sources it might be challenging to use Rails effectively with an existing database, and I have not found a way to automatically generate models from the database. With Django it's very easy to generate models automatically, but I'd appreciate first-hand experience of its suitability for working with legacy DBs. I'd also appreciate suggestions of other frameworks worth considering.

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object.

    You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris. This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

    The Long Road To Stubs

    This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled:

        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This CR encapsulates a number of chronic issues with Solaris builds:

    - We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware.
    - Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel.
    - To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: it's really hard to get right; it's really easy to get it wrong and never know it because things build anyway; and even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
    As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach:

    - To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available.
    - The analysis will take time, and remember that we're constantly trying to make builds faster, not slower.
    - By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand written rules described above. The hand written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand written approach.

    Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot.

    In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game-changing series of realizations:

    - The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime.
    - If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object.
    - In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of its publicly available functions and data. Could we build a stub object using the mapfile for the real object?
    - It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel.
    - When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following:

    - Present the same set of global symbols, with the same ELF versioning, as the real object.
    - Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment.
    - Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose.
    - For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object.
    - If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object.

    We imagined the stub library feature working as follows:

    - A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored.
    - The extra information needed (function or data, size, and bss details) would be added to the mapfile.
    - When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match.

    In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

        # DATA(i386) __iob 0x3c0
        # DATA(amd64,sparcv9) __iob 0xa00
        # DATA(sparc) __iob 0x140
        iob;

    A further problem then became clear: if we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan:

    - A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code.
    - Another perl script, used after both objects have been built, compares the real and stub objects using data from elfdump, and validates that they present the same linking interface.

    By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
    Ultimately though, the result was unsatisfactory as a basis for a real product. There were so many issues:

    - The use of stylized comments was fine for a prototype, but not close to professional enough for shipping product. The idea of having to document and support it was a large concern.
    - The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code.
    - A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work.
    - A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one.

    At that point, we needed to apply this prototype to building Solaris. As you might imagine, the task of modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years.

    Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had it in mind as I moved forward.

    The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:

        6916788 ld version 2 mapfile syntax
        PSARC/2009/688 Human readable and extensible ld mapfile syntax

    In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:

        6916796 OSnet mapfiles should use version 2 link-editor syntax

    That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this:

        We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
    I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

        % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
                -ztext -zdefs -Bdirect ...

        real        0.019708910
        user        0.010101680
        sys         0.008528431

    In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished.

    Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects, however, was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens.

    And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... and so, I backed away, put it down for a few months and did other work... until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it.

    Without stubs, the following gives a simplified high level view of how Solaris is built:

    - An initially empty directory known as the proto, and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution.
    - A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area.
    - Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built.
    - Subsequent passes run lint, and do packaging.

    Given this structure, the additions to use stub objects are:

    - A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
    - A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them.
    - A new target is added to the Makefiles called stubinstall, which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the existing plumbing used by the existing install rule.
    - The setup rule runs stubinstall over the entire lib subtree as part of its initialization.
    - All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization.

    There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them.

    After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do.

    I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult.

    At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
    And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down:

    - Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel.
    - It was suggested that we might be I/O bound, and so, the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timings between the stub and non-stub cases were just too suspiciously identical.
    - Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up.

    Eventually, a more plausible and obvious reason emerged: we build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so, wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust.

    And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with

        6993877 ld should produce stub objects
        PSARC/2010/397 ELF Stub Objects

    followed by the work to convert the ON consolidation in snv_161 (February 2011) with

        7009826 OSnet should use stub objects
        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then.

    Conclusions and Looking Forward

    Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • ASP.NET Page Caching in a Web Farm

    - by Achilles
    We have a small web farm (2 servers) balanced by the built-in network load balancer in Windows 2003. We have a few pages that use page caching. My question is: is it possible that a given user could cause a page to be cached and another user then see that content? Here is the page directive for the page in question:

        <%@ OutputCache Duration="1" NoStore="true" VaryByParam="none" %>

    The reason the duration is set to "1" is to ensure that the page isn't cached any longer than 1 second, because of transactions that actions on the page can trigger.
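
    If the cached output turns out to contain per-user content, one common way to keep users' copies separate is to vary the cache by user. This is only a sketch, assuming forms authentication and a VaryByCustom="user" attribute added to the directive above:

        using System;
        using System.Web;

        // Global.asax.cs sketch: vary the output cache per authenticated user
        // when a page declares VaryByCustom="user" in its OutputCache directive.
        public class Global : HttpApplication
        {
            public override string GetVaryByCustomString(HttpContext context, string custom)
            {
                if (string.Equals(custom, "user", StringComparison.OrdinalIgnoreCase))
                {
                    // Each user (or the anonymous pool) gets its own cache entry.
                    return context.User != null && context.User.Identity.IsAuthenticated
                        ? context.User.Identity.Name
                        : "anonymous";
                }
                return base.GetVaryByCustomString(context, custom);
            }
        }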

    Read the article

  • iPhone apps: Webapps or native?

    - by jpartogi
    Hi all, I am planning to create an iPhone version of our online web app. I am still new to iPhone development, so I don't know whether to choose a native iPhone app or a web app that runs in the iPhone browser. The requirement is actually pretty basic: the iPhone app needs to submit data to and get data from the database that is also used by the web app. Users would have the same access as in the web app; I only want this specific to the iPhone, as the user experience would be different between a web app and an iPhone app. I am also interested in selling the application on the App Store. Based on your experience, what would be better for this kind of requirement, a native iPhone app or a web app? What are the drawbacks of building a native iPhone app versus a web app that runs in the iPhone browser? Also, am I limited to Objective-C to build a native iPhone app, or is there any other framework for that? Please be gentle on me, I am not starting a flamewar.

    Read the article

  • Firefox doesn't show my CSS

    - by vtortola
    Hi, I have a strange problem: Firefox doesn't show the CSS of the page I'm working on, but Internet Explorer does. I have tried at home and at one of my friends' homes, and it happens in both places. But if I go to the Firefox Web Developer toolbar (I have it installed) and select CSS > Edit CSS, then the styles appear in the page and in the editor! As soon as I close it, they disappear again. I have no idea what the problem is :( Do you have any idea what could be causing it? Thanks in advance.

    Read the article

  • Custom Application Page does not recognize tagPrefix defined in custom web.config

    - by Hasan Khan
    I have a custom application page integrated into Central Administration. My application pages are placed in a subfolder of the Template\Admin folder. I placed my own web.config in my subfolder and added tagPrefixes in the controls section. However, when I open my application page, ASP.NET throws an exception saying that the tag SharePoint:InputFormTextBox was not recognized. What am I doing wrong? Or does it just not work this way? What can I do to achieve this?

    Read the article

  • Asp.Net : Web service throws "Thread was being aborted"

    - by Master Morality
    I have a web service that I call via AJAX that requires the user to be logged in. In each method I want to check whether the user is logged in and send a 403 code if they are not; however, when I call Response.End() I get the error "Thread was being aborted". What should I call instead?

        [WebMethod(true)]
        public string MyMethod()
        {
            if(!userIsLoggedIn)
            {
                HttpContext.Current.Response.StatusCode = 403;
                HttpContext.Current.Response.End();
            }
            /* Do stuff that should not execute unless the user is logged in... */
            ...
        }
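
    For context, Response.End aborts the current thread by design, which is where that exception comes from. A common alternative is to set the status, suppress the body, call CompleteRequest, and return early. This is only a sketch of that approach, reusing the post's hypothetical userIsLoggedIn flag inside the same web service class:

        [WebMethod(true)]
        public string MyMethod()
        {
            if (!userIsLoggedIn)
            {
                var response = HttpContext.Current.Response;
                response.StatusCode = 403;
                response.SuppressContent = true;
                // Skips the remaining pipeline steps without aborting the thread.
                HttpContext.Current.ApplicationInstance.CompleteRequest();
                return null;
            }

            // Work that should only run for logged-in users.
            return "ok";
        }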

    Read the article

  • best way to authenticate and consume web service using phonegap (html5/javascript)

    - by Raiss
    I am going to develop a PhoneGap application which is pretty simple. I need to implement authentication and some simple data transfer back and forth between the phone and the server. I prefer to use ASP.NET for the web service, and our database is MS SQL, but I am not sure what approach I should take to create a secure communication channel between the PhoneGap app and the web service. The problem with a simple AJAX request is the cross-domain limitation, and I'm not sure if JSONP is a good option. I was wondering if someone could tell me what technology I should use to make a reasonably secure connection that works with PhoneGap (HTML5, JavaScript) and a .NET web service. I understand that it's a general question, but I need to know which technology is best in such a case. Thanks.
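
    One option worth evaluating for the cross-domain issue is sending a CORS header from the service side, since the packaged app's pages are not served from your domain. The handler below is only a sketch; the class name, form fields, and wide-open origin are assumptions and would need tightening (HTTPS, a real credential check against MS SQL) before production use.

        using System.Web;

        // Sketch: a minimal ASP.NET handler that returns JSON and allows
        // cross-origin requests from the PhoneGap app.
        public class LoginHandler : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                context.Response.ContentType = "application/json";
                context.Response.AppendHeader("Access-Control-Allow-Origin", "*");

                string user = context.Request.Form["username"];
                string pass = context.Request.Form["password"];

                // Placeholder check; replace with a real lookup against the database.
                bool ok = !string.IsNullOrEmpty(user) && !string.IsNullOrEmpty(pass);

                context.Response.Write(ok ? "{\"authenticated\":true}"
                                          : "{\"authenticated\":false}");
            }

            public bool IsReusable { get { return false; } }
        }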

    Read the article

  • How to transfer SQLite db to web server

    - by Shane
    Hi, I have an application that creates an SQLite database and saves information to it over the course of a day. At the end of the day I want to export this database to a web server. Could anyone point me in the right direction for this? Should I use HTTP POST or PUT? I have researched this myself online, but there seem to be so many different ways to explore. The server side does not exist yet either. I have access to an Apache server, so I am hoping to use that. Could anyone advise me on the best/most simple way to do this? Thanks

    Read the article

  • Google OpenID changes?!?!?

    - by Andrea
    I'm trying to implement OpenID login for a web application. Whenever a new user logs in via OpenID, I create a new user in the system, and among the data I store their OpenID URL, so that next time they log in as that user. I'm testing this with my Gmail OpenID, and the problem is that every time I do this, Google sends a different OpenID URL, that is, https://www.google.com/accounts/o8/id?id=SomethingThatChangesFromTimeToTime Of course I'm then not able to tell whether or not this is a new user. I'm a bit puzzled: shouldn't the OpenID identifier always remain the same?

    Read the article

  • Is that a RESTFUL MVC Web Service?

    - by vsj
    I am aware of web services and WCF, but I have a generic question about services. I have an ASP.NET MVC application which does some basic functionality. I have a controller to which I pass records and serialize the information to XML using the XmlSerializer. Then I return this information to the browser, and it displays the XML I got from the controller action. So I get the XML representation of my class (a database object), and I am to give the URL of this application to the client, who will access it and pull the information. Is this a service then? I mean, in the end all the clients need is the XML representation, which they could also get through services, right? I am not that experienced and am probably being very silly, but please help me out... if I provide XML this way to the client, is that a service? Or is there something I need to understand here?
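
    For reference, the kind of action being described might look roughly like the sketch below; the model and controller names are placeholders, not anything from the post.

        using System.IO;
        using System.Web.Mvc;
        using System.Xml.Serialization;

        // Hypothetical record type standing in for the poster's database class.
        public class Order
        {
            public int Id { get; set; }
            public string Customer { get; set; }
        }

        public class OrdersController : Controller
        {
            // Serializes a record to XML and returns it with an XML content type.
            public ActionResult Details(int id)
            {
                var order = new Order { Id = id, Customer = "sample" }; // stand-in for a DB lookup

                var serializer = new XmlSerializer(typeof(Order));
                using (var writer = new StringWriter())
                {
                    serializer.Serialize(writer, order);
                    return Content(writer.ToString(), "text/xml");
                }
            }
        }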

    Read the article

  • [Web] Eventlistener for form input on iphone?

    - by ketenshi
    I'm playing around with jQTouch to create a web app for the iPhone. I'm using the scrolling extension to create the effect of a fixed toolbar at the top of the page while still being able to scroll the rest of the page via a scrollable div. Everything works fine except when a user pulls up the keyboard in order to fill in form elements in the scrollable div: the whole body is pushed to the top and the ugly URL bar is shown. Is there a way to prevent this?

    Read the article

  • Retrieving JSON from a web URL

    - by npeterson
    This may be a terribly uninformed question, so brace yourself. A company I'm working with has given me an 'API' that I can use to access orders; however, there are only two real commands, getorders and getorderdetails. These commands are put in the format of http://www.server.com/path/to/the/orderapi/getorders/UniqueKey/ If I go to that web address, I'm prompted for a username and password, and once authenticated I'm presented with a page of JSON-formatted order details, contained in the body of the HTML page. I would like a service to check this information and create orders in our CRM based on it. Is there an obvious way to access it without the browser/client interaction?
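
    If the username/password prompt is plain HTTP Basic (or NTLM) authentication, a scheduled job can usually fetch the same URL with credentials attached; the sketch below makes that assumption, and the credentials and URL are placeholders. Since the JSON sits inside an HTML body, the response may still need to be trimmed down to the JSON portion before parsing.

        using System;
        using System.Net;

        // Sketch: download the order feed with credentials, no browser involved.
        class OrderFetcher
        {
            static void Main()
            {
                using (var client = new WebClient())
                {
                    // Works for HTTP Basic/NTLM style prompts; placeholder credentials.
                    client.Credentials = new NetworkCredential("username", "password");

                    string body = client.DownloadString(
                        "http://www.server.com/path/to/the/orderapi/getorders/UniqueKey/");

                    // The JSON is embedded in an HTML page, so extract and parse it
                    // here before creating CRM orders.
                    Console.WriteLine(body);
                }
            }
        }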

    Read the article
