Search Results

Search found 25629 results on 1026 pages for 'site maintenance'.


  • JavaOne Latin America Schedule Changes For Thursday

    - by Tori Wieldt
    The good news: we've got LOTS of developers at JavaOne Latin America. The bad news: the rooms are too small to hold everyone! (We've heard you.) The good news: selected sessions for Thursday have been moved to larger rooms (the keynote halls). More good news: some sessions that were full on Wednesday will be repeated on Thursday. SCHEDULE CHANGES FOR THURSDAY, DECEMBER 9TH. Note: be sure to check the schedule on site, as there may still be some last-minute changes.

      Ginga, LWUIT, JavaDTV and You 2.0 (Dimas Oliveria): Thursday, December 9, 11:15am - 12:00pm, Auditorio 4
      JavaFX do seu jeito: criando aplicativos JavaFX com linguagens alternativas (Stephen Chin): Thursday, December 9, 3:00pm - 3:45pm, Auditorio 4
      Automatizando sua casa usando Java: JavaME, JavaFX, e Open Source Hardware (Vinicius Senger): Thursday, December 9, 9:00am - 9:45am, Auditorio 3
      Construindo uma arquitetura RESTful para aplicacoes ricas com HTML 5 e JSF2 (Raphael Helmonth Adrien Caetano): Thursday, December 9, 5:15pm - 6:00pm, Auditorio 2
      Dicas e Truques sobre performance em Java EE, JPA e JSF (Alberto Lemos e Danival Taffarel Calegari): Thursday, December 9, 2:00pm - 2:45pm, Auditorio 2
      Escrevendo Aplicativos Multiplataforma Incriveis Usando LWUIT (Roger Brinkley): Cancelled
      Plataforma NetBeans: sem slide - apenas codigo (Mauricio Leal): Cancelled
      Escalando o seu AJAX Push com Servlet 3.0 (Paulo Silveira): Keynote Hall, 9:00am - 9:45am
      Cobertura Completa de Ferramentas para a Plataforma Java EE 6 (Ludovic Champenois): Keynote Hall, 10:00am - 10:45am
      Servlet 3.0 - Expansivel, Assincrono e Facil de Usar (Arun Gupta): Keynote Hall, 4:00pm - 4:45pm
      Transforme seu processo em REST com JAX-RS (Guilherme Silveira): Keynote Hall, 5:00pm - 5:45pm
      The Future of Java (Fabiane Nardon e Bruno Souza): Keynote Hall, 6:00pm - 6:45pm

    Thanks for your understanding; we are tuning the conference to make it the best JavaOne possible.

    Read the article

  • Google images sometimes terribly slow when using dnsmasq

    - by Joril
    Hi everyone! I am the admin of a small LAN of 10+ computers. I've set up a dnsmasq server for DHCP and DNS resolution, and it's working almost fine. My problem is that when I try to use Google Images, sometimes it takes ages to show the actual images. I get just the textual part of the page (menus and so on) while the images themselves are shown as still-loading white boxes. When I use the DSL router directly as DNS, the site works fine all the time. The problem sometimes presents itself with Google Maps too: the map takes ages to load. Any idea on what I could try to troubleshoot this? (dnsmasq 2.47 on CentOS 5.2 64-bit; our outside connection is an asymmetrical 4 Mbps DSL)
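
    One way to narrow this down (not part of the original question, and using hypothetical addresses) is to compare lookup times through dnsmasq against the DSL router directly, and to turn on query logging plus a few cache options in dnsmasq while testing. A rough sketch, assuming dnsmasq reads /etc/dnsmasq.conf and the router answers DNS on 192.168.1.1:

        # Compare resolution time through dnsmasq and through the router directly
        dig images.google.com @127.0.0.1 | grep "Query time"
        dig images.google.com @192.168.1.1 | grep "Query time"

        # /etc/dnsmasq.conf - options worth toggling while troubleshooting
        log-queries          # log each lookup to syslog so slow names stand out
        all-servers          # query all upstream servers at once and use the fastest reply
        cache-size=1000      # enlarge the cache (the default is only 150 entries)
        no-negcache          # do not cache failed lookups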

    Read the article

  • Scheme vs Common Lisp: war stories

    - by SuperElectric
    There is no shortage of vague "Scheme vs Common Lisp" questions on both StackOverflow and on this site, so I want to make this one more focused. The question is for people who have coded in both languages: While coding in Scheme, what specific elements of your Common Lisp coding experience did you miss most? Or, conversely, while coding in Common Lisp, what did you miss from coding in Scheme? I don't necessarily mean just language features. The following are all valid things to miss, as far as the question is concerned: Specific libraries. Specific features of development environments like SLIME, DrRacket, etc. Features of particular implementations, like Gambit's ability to write blocks of C code directly into your Scheme source. And of course, language features. Examples of the sort of answers I'm hoping for: "I was trying to implement X in Common Lisp, and if I had Scheme's first-class continuations, I totally would've just done Y, but instead I had to do Z, which was more of a pain." "Scripting the build process in my Scheme project got increasingly painful as my source tree grew and I linked in more and more C libraries. For my next project, I moved back to Common Lisp." "I have a large existing C++ codebase, and for me, being able to embed C++ calls directly in my Gambit Scheme code was totally worth any shortcomings that Scheme may have vs Common Lisp, even including lack of SWIG support." So, I'm hoping for war stories, rather than general sentiments like "Scheme is a simpler language" etc.

    Read the article

  • Load balancing and HTTPS strategies

    - by Dan
    I am faced with the following problem: servers get saturated because the current load-balancing strategy is based on client IP. Some corporate clients access our servers from behind large proxies, so all of their clients appear to our load balancer with the same IP. I think we are using some hardware load-balancing device (I can investigate further if necessary). We need to maintain session affinity (the site is built in ASP), so all requests with the same IP get routed to the same node. Since all the communication goes over HTTPS, no request data (like a session ID) is available to the balancer as a client discriminator. Is there a way to use some other data besides the IP to distinguish between clients, and route clients to different nodes even when they come from the same IP? Note: I need to keep the traffic between the balancer and the nodes safe (encrypted).
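
    One common pattern, sketched below only to illustrate the approach (the question mentions a hardware balancer, and the names and addresses here are hypothetical), is to terminate TLS at the balancer, persist on an inserted cookie instead of the source IP, and re-encrypt toward the nodes. In HAProxy-style configuration it could look roughly like this:

        frontend https_in
            mode http
            bind :443 ssl crt /etc/ssl/site.pem         # terminate TLS so HTTP data is visible
            default_backend asp_nodes

        backend asp_nodes
            mode http
            balance roundrobin
            cookie SRV insert indirect nocache          # cookie persistence replaces IP affinity
            server node1 10.0.0.11:443 ssl verify none cookie n1   # re-encrypt to the node
            server node2 10.0.0.12:443 ssl verify none cookie n2

    The same idea (SSL offload plus cookie-based persistence) exists on most hardware balancers, so it may be a matter of enabling those features rather than changing products.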

    Read the article

  • Consuming Hello World pagelet in WebCenter Spaces

    - by astemkov
    Introduction: The goal of this exercise is to show you how you can use the Hello World pagelet that you just created from your web space.

    Assumptions: Let's assume the following: Pagelet Producer is running on http://pageletserver.company.com:8889/pagelets/, WebCenter is running on http://webcenter.company.com:8888/webcenter/, and you created the Hello_World pagelet as described here. For our exercise we will need a space, so let's log into WebCenter Portal and create a space called "myspace" using the "Portal Site" template.

    Registering Pagelet Producer with WebCenter Portal: In order to use our newly created pagelet from WebCenter Spaces, we first need to register the Pagelet Producer:
      1. Click the "Administration" link on the WebCenter toolbar.
      2. Open the "Configuration" tab.
      3. Click the "Services" link in the upper-left corner of the page.
      4. Click the "Portlet Producers" link on the right-hand pane of the screen.
      5. Click the "Register" button.
      6. Select the "Pagelet Producer" radio button and type Producer Name = "MyPageletProducer", Server URL = http://pageletserver.company.com:8889/pagelets/.
      7. Click the "Test" button. If everything is successful you will see the following screen. Now click "OK". The pagelet producer is registered.

    Inserting the Hello World pagelet into a WebCenter Space: Now let's insert the Hello World pagelet into the "myspace" page:
      1. Go back to "myspace", click the icon in the upper-right corner of the page and select "Edit Page".
      2. Click one of the "Add Content" buttons.
      3. Select "Mash-Ups", then select "Pagelet Producers". You will see the MyPageletProducer that we just registered.
      4. Click on it. You will see the library "MyLib" that contains our "Hello_World" pagelet. Click "MyLib" and you will see the "Hello_World" pagelet.
      5. Click the "Add" button, and then the "Close" button.
      6. Click the "Save" button, and then "Close". Now we see that our "Hello World" pagelet is inserted into the "myspace" page.

    Read the article

  • GoodFil.ms Suggests New Movies Based on Friends’ Picks

    - by Jason Fitzpatrick
    Goodfil.ms is a movie suggestion engine that doesn't suggest movies based on what the critics say or how many anonymous internet points a movie has received, but instead takes into account your personal tastes and the tastes of your friends. From the Goodfil.ms FAQ: Films are social. The best way to find movies is through the people you know. We've designed Goodfilms from the ground up to show you what your existing friends are watching and rating, and to focus on showing you what the people around you think about films instead of a random grab bag of "internet voters" or highly specialised critics. Their FAQ file is filled with links to detailed posts about the specifics of the process, so if you're the curious type we strongly suggest checking it out. In addition to the social-ranking side of Goodfil.ms there's an excellent "Recent Releases" section for major streaming services like iTunes, Netflix, and Amazon Prime; even if you don't sign up for the social side of the site you can still keep an eye on the best new releases across the board.

    Read the article

  • Using Microsoft benefits to kickstart your own development

    - by douglasscott
    Working for a big company, I enjoy all the Microsoft tools I can consume. I also have the infrastructure to support my development and team communication. I recently helped form a small consulting team that requires the same type of resources. That is when the realization of the true cost of Microsoft's professional development tools really hit me. Okay, I'll just bite the bullet and get what I'm used to working with to do high quality development projects. After just a few minutes of looking at street prices and doing some quick math, I began to have a realization: doing this right isn't cheap! Luckily there is help. If you are willing to get your ducks in a row and do a little documentation, Microsoft will give you some developer manna. I went to the BizSpark site and completed the application, which describes your company profile and services offered. The approval process took about a week. Voila, a Visual Studio Ultimate with MSDN subscription! As a start-up, Office 365 can be a great solution for all your team communications. I also enrolled in the Microsoft Cloud Essentials program as part of a business track. Once you meet the Cloud Essentials requirements you will receive 250 Office 365 licenses! This includes Office and hosted Exchange, Lync, and SharePoint. Take advantage of what Microsoft has to offer for your start-up. It just may surprise you and save you a lot of your start-up budget.

    Read the article

  • SPARC T4-4 Beats 8-CPU IBM POWER7 on TPC-H @3000GB Benchmark

    - by Brian
    Oracle's SPARC T4-4 server delivered a world record TPC-H @3000GB benchmark result for systems with four processors. This result beats eight-processor results from IBM (POWER7) and HP (x86). The SPARC T4-4 server also delivered better performance per core than these eight-processor systems from IBM and HP. Comparisons below are made on a system-to-system basis, highlighting Oracle's complete software and hardware solution. This database world record result used Oracle's Sun Storage 2540-M2 arrays (rotating disk) connected to a SPARC T4-4 server running Oracle Solaris 11 and Oracle Database 11g Release 2, demonstrating the power of Oracle's integrated hardware and software solution. The SPARC T4-4 server based configuration achieved a TPC-H scale factor 3000 world record for four-processor systems of 205,792 QphH@3000GB with price/performance of $4.10/QphH@3000GB. The SPARC T4-4 server with four SPARC T4 processors (total of 32 cores) is 7% faster than the IBM Power 780 server with eight POWER7 processors (total of 32 cores) on the TPC-H @3000GB benchmark. The SPARC T4-4 server is 36% better in price/performance compared to the IBM Power 780 server on the TPC-H @3000GB benchmark. The SPARC T4-4 server is 29% faster than the IBM Power 780 for data loading. The SPARC T4-4 server is up to 3.4 times faster than the IBM Power 780 server for the Refresh Function. The SPARC T4-4 server with four SPARC T4 processors is 27% faster than the HP ProLiant DL980 G7 server with eight x86 processors on the TPC-H @3000GB benchmark. The SPARC T4-4 server is 52% faster than the HP ProLiant DL980 G7 server for data loading. The SPARC T4-4 server is up to 3.2 times faster than the HP ProLiant DL980 G7 for the Refresh Function. The SPARC T4-4 server achieved a peak IO rate from the Oracle database of 17 GB/sec. This rate was independent of the storage used, as demonstrated by the TPC-H @3000GB benchmark, which used twelve Sun Storage 2540-M2 arrays (rotating disk), and the TPC-H @1000GB benchmark, which used four Sun Storage F5100 Flash Array devices (flash storage). [*] The SPARC T4-4 server showed linear scaling from TPC-H @1000GB to TPC-H @3000GB. This demonstrates that the SPARC T4-4 server can handle the increasingly larger databases required of DSS systems. [*] The SPARC T4-4 server benchmark results demonstrate a complete solution of building Decision Support Systems including data loading, business questions and refreshing data. Each phase usually has a time constraint and the SPARC T4-4 server shows superior performance during each phase. [*] The TPC believes that comparisons of results published with different scale factors are misleading and discourages such comparisons.

    Performance Landscape

    The table below lists the leading TPC-H @3000GB results for non-clustered systems.
    TPC-H @3000GB, Non-Clustered Systems (System / Processor / P/C/T – Memory / Composite (QphH) / $/perf ($/QphH) / Power (QppH) / Throughput (QthH) / Database / Available):

      SPARC Enterprise M9000 / 3.0 GHz SPARC64 VII+ / 64/256/256 – 1024 GB / 386,478.3 / $18.19 / 316,835.8 / 471,428.6 / Oracle 11g R2 / 09/22/11
      SPARC T4-4 / 3.0 GHz SPARC T4 / 4/32/256 – 1024 GB / 205,792.0 / $4.10 / 190,325.1 / 222,515.9 / Oracle 11g R2 / 05/31/12
      SPARC Enterprise M9000 / 2.88 GHz SPARC64 VII / 32/128/256 – 512 GB / 198,907.5 / $15.27 / 182,350.7 / 216,967.7 / Oracle 11g R2 / 12/09/10
      IBM Power 780 / 4.1 GHz POWER7 / 8/32/128 – 1024 GB / 192,001.1 / $6.37 / 210,368.4 / 175,237.4 / Sybase 15.4 / 11/30/11
      HP ProLiant DL980 G7 / 2.27 GHz Intel Xeon X7560 / 8/64/128 – 512 GB / 162,601.7 / $2.68 / 185,297.7 / 142,685.6 / SQL Server 2008 / 10/13/10

    P/C/T = Processors, Cores, Threads. QphH = the Composite Metric (bigger is better). $/QphH = the Price/Performance metric in USD (smaller is better). QppH = the Power Numerical Quantity. QthH = the Throughput Numerical Quantity.

    The following table lists data load times and refresh function times during the power run.

    TPC-H @3000GB, Non-Clustered Systems, Database Load & Database Refresh (System / Processor / Data Loading (h:m:s), T4 Advan / RF1 (sec), T4 Advan / RF2 (sec), T4 Advan):

      SPARC T4-4 / 3.0 GHz SPARC T4 / 04:08:29, 1.0x / 67.1, 1.0x / 39.5, 1.0x
      IBM Power 780 / 4.1 GHz POWER7 / 05:51:50, 1.5x / 147.3, 2.2x / 133.2, 3.4x
      HP ProLiant DL980 G7 / 2.27 GHz Intel Xeon X7560 / 08:35:17, 2.1x / 173.0, 2.6x / 126.3, 3.2x

    Data Loading = database load time. RF1 = power test first refresh transaction. RF2 = power test second refresh transaction. T4 Advan = the ratio of time to T4 time. Complete benchmark results can be found at the TPC benchmark website, http://www.tpc.org.

    Configuration Summary and Results

    Hardware Configuration: SPARC T4-4 server with 4 x SPARC T4 3.0 GHz processors (total of 32 cores, 128 threads), 1024 GB memory, and 8 x internal SAS (8 x 300 GB) disk drives. External Storage: 12 x Sun Storage 2540-M2 arrays, each with 12 x 15K RPM 300 GB drives, 2 controllers, 2 GB cache.

    Software Configuration: Oracle Solaris 11 11/11, Oracle Database 11g Release 2 Enterprise Edition.

    Audited Results: Database Size: 3000 GB (Scale Factor 3000); TPC-H Composite: 205,792.0 QphH@3000GB; Price/performance: $4.10/QphH@3000GB; Available: 05/31/2012; Total 3 year Cost: $843,656; TPC-H Power: 190,325.1; TPC-H Throughput: 222,515.9; Database Load Time: 4:08:29.

    Benchmark Description

    The TPC-H benchmark is a performance benchmark established by the Transaction Processing Council (TPC) to demonstrate Data Warehousing/Decision Support Systems (DSS). TPC-H measurements are produced for customers to evaluate the performance of various DSS systems. These queries and updates are executed against a standard database under controlled conditions. Performance projections and comparisons between different TPC-H database sizes (100GB, 300GB, 1000GB, 3000GB, 10000GB, 30000GB and 100000GB) are not allowed by the TPC. TPC-H is a data warehousing-oriented, non-industry-specific benchmark that consists of a large number of complex queries typical of decision support applications. It also includes some insert and delete activity that is intended to simulate loading and purging data from a warehouse. TPC-H measures the combined performance of a particular database manager on a specific computer system. The main performance metric reported by TPC-H is called the TPC-H Composite Query-per-Hour Performance Metric (QphH@SF, where SF is the number of GB of raw data, referred to as the scale factor).
    QphH@SF is intended to summarize the ability of the system to process queries in both single and multiple user modes. The benchmark requires reporting of price/performance, which is the ratio of the total HW/SW cost plus 3 years maintenance to the QphH. A secondary metric is the storage efficiency, which is the ratio of total configured disk space in GB to the scale factor.

    Key Points and Best Practices

    Twelve Sun Storage 2540-M2 arrays were used for the benchmark. Each Sun Storage 2540-M2 array contains 12 15K RPM drives and is connected to a single dual-port 8Gb FC HBA using 2 ports. Each Sun Storage 2540-M2 array showed 1.5 GB/sec for sequential read operations and showed linear scaling, achieving 18 GB/sec with twelve Sun Storage 2540-M2 arrays. These were standalone IO tests. The peak IO rate measured from the Oracle database was 17 GB/sec. Oracle Solaris 11 11/11 required very little system tuning. Some vendors try to make the point that storage ratios are of customer concern. However, storage ratio size has more to do with disk layout and the increasing capacities of disks, so this is not an important metric in which to compare systems. The SPARC T4-4 server and Oracle Solaris efficiently managed the system load of over one thousand Oracle Database parallel processes. Six Sun Storage 2540-M2 arrays were mirrored to another six Sun Storage 2540-M2 arrays on which all of the Oracle database files were placed. IO performance was high and balanced across all the arrays. The TPC-H Refresh Function (RF) simulates the periodic refresh portion of a Data Warehouse by adding new sales data and deleting old sales data. Parallel DML (parallel insert and delete in this case) and database log performance are key for this function, and the SPARC T4-4 server outperformed both the IBM POWER7 server and the HP ProLiant DL980 G7 server. (See the RF columns above.)

    See Also

      Transaction Processing Performance Council (TPC) Home Page
      Ideas International Benchmark Page
      SPARC T4-4 Server (oracle.com, OTN)
      Oracle Solaris (oracle.com, OTN)
      Oracle Database 11g Release 2 Enterprise Edition (oracle.com, OTN)
      Sun Storage 2540-M2 Array (oracle.com, OTN)

    Disclosure Statement

    TPC-H, QphH, $/QphH are trademarks of Transaction Processing Performance Council (TPC). For more information, see www.tpc.org. SPARC T4-4 205,792.0 QphH@3000GB, $4.10/QphH@3000GB, available 5/31/12, 4 processors, 32 cores, 256 threads; IBM Power 780 192,001.1 QphH@3000GB, $6.37/QphH@3000GB, available 11/30/11, 8 processors, 32 cores, 128 threads; HP ProLiant DL980 G7 162,601.7 QphH@3000GB, $2.68/QphH@3000GB, available 10/13/10, 8 processors, 64 cores, 128 threads.

    Read the article

  • Configuring and managing Windows web server

    - by Mike C.
    Hello, I run a few websites and I was thinking of paying for a dedicated Windows web server from GoDaddy instead of paying for each site's hosting individually. I know enough about IIS to configure the Host Header and stuff like that, but I'm a little fuzzy about the email portion of the hosting. I have a few questions: Do I need to install an SMTP server on the web server to allow for emails to be sent/received to a website email address? Or is there another approach that I'm unaware of? Are there tools that monitor the amount of bandwidth used by the server? GoDaddy charges for bandwidth and I want to make sure I don't go over. Am I opening a can of worms that I don't really want to open by going the dedicated server route? Things like server updates, security, etc? Thanks!
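
    On the bandwidth question, Windows can log the relevant counters without extra software; the command below is only a sketch (the interval and output path are arbitrary) using the built-in typeperf tool:

        rem Log total network bytes/sec once a minute to a CSV you can total up later
        typeperf "\Network Interface(*)\Bytes Total/sec" -si 60 -o C:\perflogs\bandwidth.csv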

    Read the article

  • Win a Free License for Windows 7 Ultimate or Silverlight Spy at Our West Palm Beach .Net User Group

    - by Sam Abraham
    Shervin Shakibi, Microsoft Regional Director, ASP.Net MVP and Microsoft Certified Trainer, will be our speaker at our West Palm Beach .Net User Group May meeting. Shervin founded the FlaDotNet Users Group Network, to which our West Palm Beach .Net User Group belongs. Shervin will be talking to us about the new features of Silverlight 4.0. I am personally looking forward to attending this event, as I have always found Shervin's talks fun and a great learning experience. At the end of our meeting, we will be having a free raffle. We will be giving away 1 free Windows 7 Ultimate license and 2 free Silverlight Spy licenses, as well as several books and other giveaways. Usually, everybody goes home with a freebie. We will also continue having ample networking time while enjoying free pizza/soda sponsored by Sherlock Technology and SISCO Corporation, which is a new sponsor of our group. Koen Zwikstra, Silverlight MVP and Founder of First Floor Software, has kindly offered the West Palm Beach .Net User Group several free licenses of Silverlight Spy to raffle during our meetings. We will start by raffling two copies during our May meeting. Silverlight Spy is a very valuable tool in debugging Silverlight applications. It has been mentioned at MIX10 (http://firstfloorsoftware.com/blog/silverlight-spy-at-mix10/) as well as by Microsoft Community Leaders (http://blogs.msdn.com/chkoenig/archive/2008/08/29/silverlight-spy.aspx). I am using Silverlight Spy myself and will probably be using it to demonstrate Silverlight internals during my talks. I think Koen's gift to our group will bring great value to our fortunate members who end up winning the licenses. Thank you, Koen, for your kind gift, and I am looking forward to meeting you all on May 25th, 2010 at 6:30 PM at CompTec (http://www.fladotnet.com/Reg.aspx?EventID=462). Sam Abraham, Site Director - West Palm Beach .Net User Group

    Read the article

  • Scanner Installation: Epson Perfection 3170 Photo, Windows 7 & Adobe Acrobat

    - by Galaxy5727
    I installed driver v3.04A (epson12180.exe) from Epson's web site; it is supposedly for Windows 7. Now, the scanner works by itself, i.e. if I hit the scan button I get a dialog menu and can scan with one of the options there, but not with Adobe Acrobat. What I need is to have it working from Adobe Acrobat when I go File - Create PDF - From Scanner - Custom Scan... The current state is that I see Scanner: "Please select a device" and no other available devices. I remember I used to see at least two other devices, like "WIA Epson Perfection" and something like "Epson Perfection". "WIA Epson Perfection" is the option that I am hoping to see to move on. Scanner: Epson Perfection 3170 Photo. System: Windows 7, 64-bit. Software: Adobe Acrobat 9 Pro.

    Read the article

  • Azure, don't give me multiple VMs, give me one elastic VM

    - by FransBouma
    Yesterday, Microsoft revealed major new features for Windows Azure (see ScottGu's post). It all looks shiny and great, but after reading most of the material describing the new features, I still find the overall idea behind all of it flawed: why should I care how many VMs my web app runs on? Isn't that a problem for the Windows Azure engineers / software to solve? And what if I need the file system? Why can't I simply get a virtual filesystem? To illustrate my point, let's use a real example: a product website with a customer system/database and next to it a support site with an accompanying database. Both are written in .NET, using ASP.NET, and each uses a SQL Server database. The product website offers files for customers to download, very simple. You have a couple of options to host these websites:

      Buy a server, place it in a rack at an ISP and run the sites on that server.
      Use 'shared hosting' with an ISP, which means your sites' appdomains are running on the same machine, as are the stored files, and the databases are hosted on the same server as the other shared databases.
      Hire a VM, install your OS of choice at an ISP, and host the sites on that VM; basically the same as the first option, except you don't have a physical server.
      At some cloud vendor, either host the sites 'shared' or in a VM. See above.

    With all of those options, scalability is a problem, even with the cloud-based ones, though not for the same reasons:

      The physical server solution has the obvious problem that if you need more power, you need to buy a bigger server or more servers, which requires you to add replication and other overhead.
      Shared hosting solutions are almost always capped on memory usage / traffic and database size: if your sites get too big, you have to move out of the shared hosting environment and start over with one of the other solutions.
      The VM solution, be it a VM at an ISP or 'in the cloud' at e.g. Windows Azure or Amazon, in theory allows scaling out by simply instantiating more VMs; however, that too introduces the same overhead problems as with the physical servers: suddenly more than one instance runs your sites.

    If a cloud vendor offers its services in the form of VMs, you won't gain much over having a VM at some ISP: the main problems you have to work around are still there. When you spin up more than one VM, your application must be completely stateless at any moment, including the DB subsystem, because what's in memory in instance 1 might not be in memory in instance 2. This might sound trivial but it's not. A lot of the websites out there started rather small: they were perfectly runnable on a single machine with normal memory and CPU power. After all, you don't need a big machine to run a website with even thousands of users a day. Moving these sites to a multi-VM environment will cause a problem: all the in-memory state they use, all the multi-page transitions they use while keeping state across the transition, they can't do that anymore like they did on a single machine. State is something of the past: you have to store every byte of state in either a DB, a viewstate or a cookie somewhere, so that with the next request all state information is available through the request, as nothing is kept in-memory.

    Our example uses a bunch of files in a file system. Using multiple VMs will require that these files move to a cloud storage system which is mounted in each VM so we don't have to store the files on each VM. This might require different file paths, but this change should be minor. What's perhaps less minor is the maintenance procedure in place on the new type of cloud storage used: instead of ftp-ing into a VM, you might have to update the files using different ways / tools. All in all this makes moving an existing website which was written for an environment that's based around a VM (namely .NET with its CLR) overly cumbersome and problematic: it forces you to refactor your website system to be usable 'in the cloud', which is caused by the limited way e.g. Windows Azure offers its cloud services: in blocks of VMs.

    Offer a scalable, flexible VM which extends with my needs

    Instead, cloud vendors should offer simply one VM to me. On that VM I run the websites, store my DB and my files. As it's a virtual machine, how this machine is actually run on physical hardware (e.g. partitioned), I don't care, as that's a problem for the cloud vendor to solve. If I need more resources, e.g. I have more traffic to my server, way more visitors per day, the VM stretches, like I bought a bigger box. This frees me from the problem which comes with multiple VMs: I don't have any refactoring to do at all: I can simply build my website as if it runs on my local hardware server, upload it to the VM offered by the cloud vendor, install it on the VM and I'm done. "But that might require changes to Windows!" Yes, but Microsoft is Windows. Windows Azure is their service; they can make whatever change to what they offer to make it look like it's Windows. Yet, they're stuck, like Amazon, in thinking in VMs, which forces developers to 'think ahead' and gamble whether they would need to migrate to a cloud with multiple VMs in the future or not. Which comes down to: gamble whether they should invest time in code / architecture which they might never need. (YAGNI anyone?) So the VM we're talking about: is that a low-level VM which runs a guest OS, or is that VM a different kind of VM?

    The flexible VM: .NET's CLR?

    My example websites are ASP.NET based, which means they run inside a .NET appdomain, on the .NET CLR, which is a VM. The only physical OS resource the sites need is the file system, and even this is accessed through .NET. In short: all the websites see is what .NET allows the websites to see; the world as the websites know it is what .NET shows them and lets them access. How the .NET appdomain is run physically, that's the concern of .NET, not mine. This begs the question why Windows Azure doesn't offer virtual appdomains. Or better: .NET environments which look like one machine but could be physically multiple machines. In such an environment, no change has to be made to the websites to migrate them from a local machine or their own server to the cloud to get proper scaling: the .NET VM will simply scale with the need: more memory needed, more CPU power needed, it stretches. What it offers to the application running inside the appdomain simply increases, but is not fragmented: all resources are available to the application. This means that the problem of how to scale is back where it should be: with the cloud vendor. "Yeah, great, but what about the databases?" The .NET application communicates with the database server through an ADO.NET provider. Where the database is located is not a problem of the appdomain: the ADO.NET provider has to solve that. In other words: we can host the databases in an environment which offers itself as a single resource and is accessible through one connection string without replication overhead on the outside, and use that environment inside the .NET VM as if it were a single DB.

    But what about memory replication and other problems? This environment isn't simple, at least not for the cloud vendor. But it is simple for the customer who wants to run his sites in that cloud: no work needed. No refactoring of existing code needed. Upload it, run it. Perhaps I'm dreaming and what I described above isn't possible. Yet, I think if cloud vendors don't move in that direction, what they're offering isn't interesting: it doesn't solve a problem at all; it simply offers a way to instantiate more VMs with the guest OS of choice, at the cost of me needing to refactor my website code so it can run in the straitjacket form factor dictated by the cloud vendor. Let's not kid ourselves here: most of us developers will never build a website which needs a truckload of VMs to run it; almost all websites created by developers can run on just a few VMs at most. Yet, the most expensive change is right at the start: moving from one to two VMs. As soon as you have refactored your website code to run across multiple VMs, adding another one is just as easy as clicking a mouse button. But that first step, that's the problem here, and as it's right there at the beginning of scaling the website, it's particularly strange that cloud vendors refuse to solve that problem and leave it to the developers. Which makes migrating 'to the cloud' particularly expensive.

    Read the article

  • Saving a modified InfoPath form to its form library

    - by Nathan Lykken
    Within our corporate SharePoint 2007 site, there is a particular form library that contains 10 separate files. 9 of these are either Excel, Word, or PowerPoint files and one of these is an InfoPath 2007 form that serves as a report. After noticing an error within this InfoPath form, I saved this InfoPath form to my local directory and then, within the design mode of InfoPath, I modified this InfoPath form. What is the proper way to save this modified InfoPath form to its form library? Everything that I have tried results in nobody except myself having access to this modified InfoPath form. I can open this InfoPath form without error but when my coworkers try to open this InfoPath form on their machines, they receive this error: “The form cannot be opened because it requires the domain permission level and it currently has restricted permission. To fix this problem, open the form from the location it was published to."

    Read the article

  • F5/BigIP rule to redirect affinity-bound users from INACTIVE pool node to other ACTIVE node

    - by j pimmel
    We have several server nodes set up for the end users of our system, and because we don't use any kind of session replication in the app servers, F5 maintains affinity for users with the ACTIVE node the client was first bound to. At times when we want to re-deploy the app, we change the F5 config and take a node out of the ACTIVE pool. Gradually the users filter off and we can deploy, but the process is a bit slow. We can't just dump all the users onto a different node because, given the update-heavy nature of the user activities, we could cause them to lose changes. That said, there is one URL/endpoint - call it http://site/product/list - which we know, when the client hits it, means we could shove them off the INACTIVE node they had affinity with and onto a different ACTIVE node. We have had a few tries at writing an F5 rule along these lines, but haven't had much success, so I thought I might ask here, assuming it's possible; I have no reason to think it's not, based on what we have found so far.
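
    For what it's worth, one shape such a rule could take is sketched below; it is an untested assumption (the pool name and path are hypothetical), not a known-good iRule, but it shows the idea of overriding persistence only on the known-safe endpoint:

        when HTTP_REQUEST {
            # /product/list is the endpoint known to be safe to switch on (hypothetical path)
            if { [HTTP::path] equals "/product/list" } {
                persist none          # ignore the existing affinity record for this request
                pool active_pool      # send the client to a member of the ACTIVE pool
            }
        }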

    Read the article

  • Cross-submission robots.txt for multiple domains on single host

    - by sidd.darko
    We are running a site with multiple languages hosted in a single environment on IIS7. For example: oursite.com (English), oursite.de (German), oursite.es (Spanish). This is a single-host environment: all of these sites are in the same application space on the same physical machine. I need to do cross-submission of sitemaps via robots.txt. Looking at the sitemaps.org guidelines for this suggests it is possible, but the example indicates different physical machines. Will the following entries in oursite.com/robots.txt work? http://www.oursite.com/sitemap-oursite-de.xml http://www.oursite.com/sitemap-oursite-es.xml
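
    One reading of the sitemaps.org cross-submission guidance is that each domain's own robots.txt carries the Sitemap line for its URLs, even though the sitemap files themselves live on the .com host; a sketch using the hostnames from the question (the filenames are illustrative):

        # served as http://www.oursite.de/robots.txt
        Sitemap: http://www.oursite.com/sitemap-oursite-de.xml

        # served as http://www.oursite.es/robots.txt
        Sitemap: http://www.oursite.com/sitemap-oursite-es.xml

        # served as http://www.oursite.com/robots.txt
        Sitemap: http://www.oursite.com/sitemap-oursite-com.xml

    Since all domains share one application space, serving a different robots.txt per host name may need a rewrite rule or handler, which is a separate piece of work.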

    Read the article

  • Apache virtual host proxy to nginx for ruby

    - by Kevin Brown
    I'm running a few php sites off apache and want to start rails dev. I've installed rvm/nginx and can get my ruby site by going to websiteroot.com:8000... How do I pass ruby.websiteroot.com to websiteroot.com:8000? What's the best way for me to route a subdomain for ruby dev?? I'd switch to nginx completely if it weren't for all my php sites--seems like it's easier to just proxy for ruby. Advice? My nginx config looks like this:

        server {
            listen 8000;
            server_name website.com;
            root /home/me/sites/ruby_folder/public;
            ...
        }

    My apache config looks like this:

        <VirtualHost>
            ServerName ruby.website.com
            ProxyPreserveHost on
            ProxyPass / http://127.0.0.1:8000
            ProxyPassReverse / http://127.0.0.1:8000
        </VirtualHost>
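
    Assuming mod_proxy and mod_proxy_http are enabled, a name-based virtual host along these lines (a sketch built from the config in the question, with trailing slashes added and the backend kept on port 8000) is the usual way to hand the subdomain off to the nginx/ruby backend:

        <VirtualHost *:80>
            ServerName ruby.website.com
            ProxyPreserveHost On
            ProxyRequests Off                      # reverse proxy only, never a forward proxy
            ProxyPass        / http://127.0.0.1:8000/
            ProxyPassReverse / http://127.0.0.1:8000/
        </VirtualHost>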

    Read the article

  • Load balanced asp.net websites and required memory usage

    - by Matt
    Each of my servers has 8 GB RAM and the memory usage hovers around 7 GB. I have a load balancer available to me, but at the moment I'm worried that putting my sites through it will cause the platform to fall over. The load balancer would be configured with a sticky round-robin, where a new connection is round-robin but subsequent connections from the same source IP will remain on the same server (until a limit is reached). That's all standard stuff. How do I know what memory usage my sites will need across the platform when I put them through the load balancer? Rather than knowing that a site is using 150 MB on a particular server, I could face a situation where 150 MB is taken up on each of the servers. I know that with only 1 GB free I could have a serious problem on my hands. If I free up some memory, then how can I work out what I need to have free to prevent this from happening? Thanks Matt

    Read the article

  • Mark Hurd Believes HR is the Next Major Revenue Driver: Read His Latest LinkedIn Influencer Blog

    - by kristin.jellison
    “Most CEOs realize they need to make some dramatic changes in how they recruit people, align and manage performance, make compensation decisions, and optimize talent,” Oracle President Mark Hurd writes. The key issue, he explains, is that many CEOs aren’t equipping their HR teams with the tools and resources they need to unlock employees’ full value. This oversight is keeping HR organizations walled off from revenue generation and customer engagements—two chief sources of value for a company. So what is a CEO to do, given tightening budgets, a sluggish economy and a rapidly changing workforce? Hurd’s answer: invest in a modern Human Capital Management (HCM) system—one equipped with built-in intelligence and predictive analytics capabilities. To find out more about how to deliver effective HCM transformations, read Mark Hurd’s full article, “How CEOs Can Transform HR into a Revenue Driver” and visit the Oracle HCM Cloud Service site. We also encourage you to log into your LinkedIn account and “Follow” Mark to receive future posts. Share the link to his blog with your networks via Twitter, Facebook and other social media channels. You can also “Like” the post on Oracle’s LinkedIn and Facebook pages, and/or retweet via @Oracle.

    Read the article

  • PHP FastCGI HTTP Error 500 on Windows 7

    - by CJM
    I've just installed PHP (5.3.1) and MySQL (5.1.44) on my development machine. Then I used the Web Platform Installer to install copies of Joomla and Drupal. However, when I try to browse either application, I get an HTTP Error 500:

      Module: FastCgiModule
      Notification: ExecuteRequestHandler
      Handler: PHP_via_FastCGI
      Error Code: 0x00000000
      Requested URL: http://localhost:808/drupal/index.php
      Physical Path: D:\Projects\drupal\index.php
      Logon Method: Anonymous
      Logon User: Anonymous

    PHPInfo.php reports that FastCGI is configured (not sure if that is significant). Surely the fact that PHPInfo.php reports anything is an indication that PHP itself is working...? I'm struggling to know where to look for a solution... Each application appears to be configured similarly to my other [ASP/ASP.NET] applications.
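
    A 500 from PHP_via_FastCGI with error code 0x00000000 is often PHP itself failing rather than IIS; the php.ini directives below are the ones commonly recommended for IIS FastCGI and are worth checking (a sketch, not a guaranteed fix; the log path is hypothetical), together with error logging so the underlying PHP error becomes visible:

        ; php.ini settings commonly recommended for IIS 7 + FastCGI
        fastcgi.impersonate = 1      ; let PHP use IIS impersonation tokens
        cgi.fix_pathinfo = 1         ; correct PATH_INFO handling under FastCGI
        cgi.force_redirect = 0       ; must be off when PHP runs under IIS
        log_errors = On
        error_log = "C:\php\php-errors.log"   ; point this at any writable location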

    Read the article

  • Website pages very slow to load on first access

    - by Merianos Nikos
    I have a production web server where sites are very slow to load the first time they are accessed. After the first time, all connections are normal and fast. In the following screen you can see the first test I ran for a random site on my server: and here you can see the result of the second request: As you can see, there is a big change for the second load of the same page. Here is the page load time graph: My problem is that I don't really know anything about Red Hat Linux servers, and also I don't know where I can start. Can somebody help me? I would like to find out the cause of the long "wait" time in the connection. I know that my question is very minimal, but you can always ask me to give you information about the server.
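
    A first request that is slow while repeats are fast often points at DNS lookups, a cold application or opcode cache, or keep-alive differences; one way to see where the time goes (a sketch with a placeholder URL) is to time the phases of a request with curl, run twice in a row:

        # Break a request into phases: name lookup, TCP connect, first byte, total
        curl -o /dev/null -s -w "dns=%{time_namelookup} connect=%{time_connect} firstbyte=%{time_starttransfer} total=%{time_total}\n" \
             http://www.example-site.com/

    If only the first run is slow, comparing the dns and firstbyte values separates resolver delays from application warm-up.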

    Read the article

  • Folder Redirection - Explorer requires manual refresh

    - by Pete
    Hello, I am having an issue where, when a user's My Documents folder is redirected to a DFS share, Windows Explorer requires a manual refresh after creating a new folder, file, etc. (Interestingly, not when making a new briefcase.) I have tried a number of MS knowledge base articles, a hotfix and a registry change, all with no success. (http://support.microsoft.com/?kbid=823291 ; http://support.microsoft.com/kb/873392) The problem only occurs when going through the My Documents icon. If I map a home drive for the user to the exact same location (i.e. H:\DFS\user\documents), open that drive and make new folders, then there is no problem. Mapping My Documents to H:\ also resolves the issue; however, as we need folder sync and people logging on off site with cached profiles, this is not a workable solution (as H: will not map and there will be no access to their docs). Has anyone managed to figure out a fix for this? Thanks, Pete.

    Read the article

  • Building Web Applications with ACT and jQuery

    - by dwahlin
    My second talk at TechEd is focused on integrating ASP.NET AJAX and jQuery features into websites (if you’re interested in Silverlight you can download code/slides for that talk here). The content starts out by discussing ScriptManager features available in ASP.NET 3.5 and ASP.NET 4 and provides details on why you should consider using a Content Delivery Network (CDN).  If you’re running an external facing site then checking out the CDN features offered by Microsoft or Google is definitely recommended. The talk also goes into the process of contributing to the Ajax Control Toolkit as well as the new Ajax Minifier tool that’s available to crunch JavaScript and CSS files. The extra fun starts in the next part of the talk which details some of the work Microsoft is doing with the jQuery team to donate template, globalization and data linking code to the project. I go into jQuery templates, data linking and a new globalization option that are all being worked on. I want to thank Stephen Walther, Dave Reed and James Senior for their thoughts and contributions since some of the topics covered are pretty bleeding edge right now.The slides and sample code for the talk can be downloaded below.     Download Slides and Samples
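
    For reference, the CDN support mentioned here surfaces in ASP.NET 4 as the ScriptManager's EnableCdn property; a minimal sketch of what that looks like on a page:

        <%-- ASP.NET 4: EnableCdn="true" makes named framework script references
             resolve to their Microsoft Ajax CDN URLs instead of local WebResource URLs --%>
        <asp:ScriptManager ID="ScriptManager1" runat="server" EnableCdn="true" />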

    Read the article

  • apache2 force proxy for specific url on a subdomain

    - by Tony G.
    Hi, I have a site that has dynamic virtual subdomains using mod_rewrite, as defined like this:

        <VirtualHost *:80>
            ServerName example.com
            ServerAlias *.example.com
            DocumentRoot /var/www/example.com/www
            RewriteEngine on
            RewriteCond %{HTTP_HOST} ^[^.]+\.example.com$
            RewriteRule ^(.+) %{HTTP_HOST}$1 [C]
            RewriteRule ^([^.]+)\.example.com(.*) /var/www/example.com/$1$2
        </VirtualHost>

    The problem is that I want a specific URL, say subdomain.example.com/CONTROL/, to point back to www.example.com/ using a proxy (not URL redirecting). I have tried adding:

        RewriteRule ^([^.]+)\.example.com/CONTROL(.*) /var/www/example.com/www$2 [P]

    But that didn't work. Any ideas?
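
    The [P] rule in the question rewrites to a filesystem path, which mod_rewrite cannot proxy; proxying needs a full http:// URL as the target, and in virtual-host context the rule pattern matches the URL path rather than the host. A sketch of one way to do it (assuming mod_proxy is loaded; place it before the existing rules):

        RewriteEngine On
        # Only for the dynamic subdomains, not the bare/www domain
        RewriteCond %{HTTP_HOST} ^([^.]+)\.example\.com$ [NC]
        RewriteCond %{HTTP_HOST} !^www\.example\.com$ [NC]
        # Proxy subdomain.example.com/CONTROL/... back to the main site
        RewriteRule ^/?CONTROL(.*)$ http://www.example.com$1 [P,L]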

    Read the article

  • MediaTemple tcpsndbuf QoS Alerts

    - by theturninggate
    I'm hosting with MediaTemple on a (dv) Dedicated-Virtual 3.5 server. My site consists of a WordPress blog and some custom PHP pages (nothing too intense), and I serve 500-700 unique visitors per day. Despite my pretty modest numbers, I suffer from regular Apache crashes on account of QoS Alerts, mostly flagged as "tcpsndbuf". MediaTemple support -- usually tops -- has been pretty useless on this matter. I'm looking for answers as to how/why this is happening, and advice on how to stop it. My website is a good portion of my livelihood, and downtime equates to lost income. Any and all help much appreciated. -Matt
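
    On a (dv), the tcpsndbuf alert is a Virtuozzo/OpenVZ bean counter being exhausted, usually because Apache holds many connections (and their send buffers) open at once; the snippet below is a sketch of how to check the counter and the usual Apache settings to rein in (the values shown are illustrative, not tuned for this site):

        # Show the limit, current usage and failure count for the TCP send buffer counter
        grep -E "tcpsndbuf|failcnt" /proc/user_beancounters

        # httpd.conf - reduce how many connections are held open and for how long
        KeepAlive On
        KeepAliveTimeout 3
        MaxKeepAliveRequests 100
        <IfModule prefork.c>
            MaxClients 40            # size this to fit within the container's memory/buffer limits
        </IfModule>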

    Read the article
