Search Results

Search found 9617 results on 385 pages for 'fresh dev'.

Page 302 of 385

  • Sending data remotely from iPhone native apps to Rails apps

    - by jpartogi
    Dear all, I have decided to develop a native iPhone app as a complement to our web apps. Now I am wondering what my options are for sending data remotely from the iPhone app to our online database. What I can think of off the top of my head - since I come from a web dev background - is JSON. Our web app is built with Rails, so I figure it would not be difficult to accept JSON requests from the iPhone app. But the next question is: is it difficult to send JSON data remotely from the iPhone app? If JSON is not recommended, what are my other options? Thank you so much for the assistance. Really appreciate it.
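
    (A minimal sketch of the sending side, assuming a hypothetical /items.json endpoint on the Rails app; the JSON payload is built by hand here, though a JSON library would normally do the encoding.)

        NSURL *url = [NSURL URLWithString:@"https://example.com/items.json"];   // hypothetical endpoint
        NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:url];
        [request setHTTPMethod:@"POST"];
        [request setValue:@"application/json" forHTTPHeaderField:@"Content-Type"];

        NSString *json = @"{\"item\":{\"name\":\"test\",\"qty\":1}}";           // hand-built payload
        [request setHTTPBody:[json dataUsingEncoding:NSUTF8StringEncoding]];

        NSURLResponse *response = nil;
        NSError *error = nil;
        NSData *body = [NSURLConnection sendSynchronousRequest:request
                                             returningResponse:&response
                                                         error:&error];
        // In real code, check `error`, parse `body` with your JSON library of choice,
        // and do not run a synchronous request on the main thread.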

    Read the article

  • Using makecert for Development SSL

    - by John
    Here's my situation: I'm trying to create an SSL certificate that will be installed on all developers' machines, along with two internal servers (everything is non-production). What do I need to do to create a certificate that can be installed in all of these places? Right now I've got something along these lines, using the makecert application in Microsoft Visual Studio 8\SDK\v2.0\Bin:

        makecert -r -pe -n "CN=MySite.com Dev" -b 01/01/2000 -e 01/01/2033 -eku 1.3.6.1.5.5.7.3.1 -ss Root -sr localMachine -sky exchange -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12 mycert.cer

    However, I'm not sure how to place this .cer file on the other computers, and when I install it in my local IIS, every time I visit a page via https: I get the security prompt (even after I've installed the certificate). Has anyone done this before?
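
    (For reference, one common way to trust the generated .cer on the other machines is to import it into the Local Machine Trusted Root store - a sketch, run from an elevated prompt on each machine:)

        certutil -addstore "Root" mycert.cer

    Note that browsers will generally still warn if the CN on the certificate does not match the host name being browsed, regardless of whether the certificate itself is trusted.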

    Read the article

  • Reviving a deleted file for use in my workspace

    - by John Cowan
    Greetings. We run Perforce with several users. Each user has their own development website that shows files in their workspace. This is great for making and viewing changes to webpages before submitting them. Some time ago, we deleted a few pages in Perforce. I would like to revive these pages, but not make them visible on our live site. I want to view them in my workspace and on my dev site, but I do not want to push them out to our live server. In the "depot" tab of my P4 client, I can see the deleted files. I cannot see them in the "Workspace" tab of my client. How can I revive them for use in my workspace, but not make them live to the world? I'm not a P4 admin, so I could use a little guidance. Thanks for any help,
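
    (A sketch of one common command-line approach - sync the revision just before the delete into your workspace, which puts the content back locally without submitting anything; the depot path and revision number are hypothetical:)

        p4 sync //depot/www/oldpage.html#3     # last revision before the delete - restores the file in the workspace only
        p4 add //depot/www/oldpage.html        # optional: open for add later only if you decide to resurrect it in the depot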

    Read the article

  • How do I ignore the Perl shebang on Windows with Apache 2?

    - by nbolton
    I have set up a local Perl web environment on my Windows machine. The application I'm working on is originally from a Linux server, so the shebang for the source .pl files looks like this: #!/usr/bin/perl This causes the following error on my Windows dev machine: (OS 2)The system cannot find the file specified. Is it possible to change my Apache 2 conf so that the shebang is ignored on my Windows machine? Of course I could set the shebang to #!c:\perl\bin\perl.exe, that much is obvious; but the problem comes when deploying the updated files. Clearly it would be very inconvenient to change this back on each deploy. I am using ActivePerl on Windows 7. Update: I should have mentioned that I need to keep the shebang so that the scripts will work on our shared-hosting Linux production server. If I did not have this constraint and didn't have to use the shebang, the obvious answer would be to just not use it.
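
    (One Apache-on-Windows option that may fit here is the ScriptInterpreterSource directive, which makes Apache look up the interpreter from the Windows registry file association instead of the #! line - a sketch for httpd.conf, with a hypothetical path:)

        # Windows-only: resolve the interpreter for .pl files from the registry, ignoring the shebang
        <Directory "C:/path/to/your/scripts">
            ScriptInterpreterSource Registry
        </Directory>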

    Read the article

  • jQuery autocomplete not always working on elements

    - by PoweRoy
    I'm trying to create a greasemonkey script (for Opera) to add autocomplete to input elements found on a webpage, but it's not completely working. I first got the autocomplete plugin working:

        // ==UserScript==
        // @name autocomplete
        // @description autocomplete
        // @include *
        // ==/UserScript==

        // Add jQuery
        var GM_JQ = document.createElement('script');
        GM_JQ.src = 'http://jquery.com/src/jquery-latest.js';
        GM_JQ.type = 'text/javascript';
        document.getElementsByTagName('head')[0].appendChild(GM_JQ);

        var GM_CSS = document.createElement('link');
        GM_CSS.rel = 'stylesheet';
        GM_CSS.href = 'http://dev.jquery.com/view/trunk/plugins/autocomplete/jquery.autocomplete.css';
        document.getElementsByTagName('head')[0].appendChild(GM_CSS);

        var GM_JQ_autocomplete = document.createElement('script');
        GM_JQ_autocomplete.type = 'text/javascript';
        GM_JQ_autocomplete.src = 'http://dev.jquery.com/view/trunk/plugins/autocomplete/jquery.autocomplete.js';
        document.getElementsByTagName('head')[0].appendChild(GM_JQ_autocomplete);

        // Check if jQuery's loaded
        function GM_wait() {
            if(typeof window.jQuery == 'undefined') {
                window.setTimeout(GM_wait,100);
            } else {
                $ = window.jQuery;
                letsJQuery();
            }
        }
        GM_wait();

        function letsJQuery() {
            $("input[type='text']").each(function(index) {
                $(this).val("test autocomplete");
            });
            $("input[type='text']").autocomplete("http://mysite/jquery_autocomplete.php", {
                dataType: 'jsonp',
                parse: function(data) {
                    var rows = new Array();
                    for(var i=0; i<data.length; i++){
                        rows[i] = { data:data[i], value:data[i], result:data[i] };
                    }
                    return rows;
                },
                formatItem: function(row, position, length) {
                    return row;
                },
            });
        }

    I see the 'test autocomplete', but using the Opera debugger (firefly) I don't see any communication to my php page. (Yes, mysite is fictional, but it works here.) Trying it on my own page:

        <body>
            no autocomplete:  <input type="text" name="q1" id="script_1"><br>
            autocomplete on:  <input type="text" name="q2" id="script_2" autocomplete="on"><br>
            autocomplete off: <input type="text" name="q3" id="script_3" autocomplete="off"><br>
            autocomplete off: <input type="text" name="q4" id="script_4" autocomplete="off"><br>
        </body>

    This works, but when trying it on other pages it sometimes won't: e.g. http://spitsnieuws.nl/ works, but http://nu.nl and http://dumpert.nl don't work.
    Trying the autocomplete of jQuery UI has more problems:

        // ==UserScript==
        // @name autocomplete
        // @description autocomplete
        // @include *
        // ==/UserScript==

        // Add jQuery
        var GM_JQ = document.createElement('script');
        GM_JQ.src = 'http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js';
        GM_JQ.type = 'text/javascript';
        document.getElementsByTagName('head')[0].appendChild(GM_JQ);

        var GM_CSS = document.createElement('link');
        GM_CSS.rel = 'stylesheet';
        GM_CSS.href = 'http://ajax.googleapis.com/ajax/libs/jqueryui/1.8/themes/base/jquery-ui.css';
        document.getElementsByTagName('head')[0].appendChild(GM_CSS);

        var GM_JQ_autocomplete = document.createElement('script');
        GM_JQ_autocomplete.type = 'text/javascript';
        GM_JQ_autocomplete.src = 'http://ajax.googleapis.com/ajax/libs/jqueryui/1.8/jquery-ui.min.js';
        document.getElementsByTagName('head')[0].appendChild(GM_JQ_autocomplete);

        // Check if jQuery's loaded
        function GM_wait() {
            if(typeof window.jQuery == 'undefined') {
                window.setTimeout(GM_wait,100);
            } else {
                $ = window.jQuery;
                letsJQuery();
            }
        }
        GM_wait();

        // All your GM code must be inside this function
        function letsJQuery() {
            $("input[type='text']").each(function(index) {
                $(this).val("test autocomplete");
            });
            $("input[type='text']").autocomplete({
                source: function(request, response) {
                    $.ajax({
                        url: "http://mysite/jquery_autocomplete.php",
                        dataType: "jsonp",
                        success: function(data) {
                            response($.map(data, function(item) {
                                return { label: item, value: item }
                            }))
                        }
                    })
                }
            });
        }

    This will work on my html page, http://spitsnieuws.nl and http://dumpert.nl, but not on http://nu.nl. (dumpert didn't work with the plugin autocomplete.)

        //http://spitsnieuws.nl
        <input class="frmtxt ac_input" type="text" id="zktxt" name="query" autocomplete="off">
        //http://dumpert.nl
        <input type="text" name="srchtxt" id="srchtxt">
        //http://nu.nl
        <input id="zoekfield" name="q" type="text" value="Zoek nieuws" onfocus="this.select()" type="text">

    Anyone know why the autocomplete functionality doesn't work? Why is the request to the php page not being made? And why can't I add my autocomplete to google.com?

    Read the article

  • CFSocketConnectToAddress and unrecognized selector sent to instance

    - by madmik3
    Hello, I am somewhat new to iPhone dev and I have been getting an unrecognized selector exception when I call CFSocketConnectToAddress in this code. I think it might be something basic that I am doing wrong. Any ideas? This is the complete error I get: NSInvalidArgumentException unrecognized selector sent to instance 0x3922170 (0x3922170 is the calling class.)

        - (BOOL)connect {
            CFSocketRef mySocket = CFSocketCreate(kCFAllocatorDefault, PF_INET, SOCK_DGRAM, IPPROTO_UDP, 0, socketCallback, NULL);
            @try {
                CFDataRef data = (CFDataRef)[_netService addresses];
                CFSocketConnectToAddress(mySocket, data, 500);
            }
            @catch (NSException * e) {
                NSLog([e name]);
                NSLog([e reason]);
            }
            //char joke[] = "Why did the chicken cross the road?";
            //CFSocketError err = CFSocketSendData(mySocket, joke, (strlen(joke)+1), 10);
            return true;
        }

        void socketCallback (CFSocketRef s, CFSocketCallBackType callbackType, CFDataRef address, const void *data, void *info) {
        }
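
    (For what it's worth, a sketch of the likely intent: -[NSNetService addresses] returns an NSArray of NSData objects rather than a single CFDataRef, so one element has to be picked out, assuming the service has resolved and has at least one address.)

        NSArray *addresses = [_netService addresses];
        if ([addresses count] > 0) {
            // NSData is toll-free bridged to CFDataRef, so the element can be passed straight through
            CFDataRef address = (CFDataRef)[addresses objectAtIndex:0];
            CFSocketConnectToAddress(mySocket, address, 500);
        }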

    Read the article

  • ASP.NET SetAuthCookie weird behaviour

    - by rlb.usa
    Hello SO, I'm trying to do user impersonation for a web application we have. The user selects the user they'd like to emulate/impersonate and then clicks the button, which fires this:

        protected void uxImpersonate_Click(object sender, EventArgs e)
        {
            ...
            FormsAuthentication.SetAuthCookie(uxUserToEmulate.SelectedValue, false);
            Response.Redirect("Impersonation.aspx");  //reload page manually
        }

    We have a dev - test - production server environment and on two servers this works just fine, but on another one, in all browsers, it kicks me to the login screen. What's going on and how can I fix it? We're on ASP.NET 2.0.
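
    (One thing worth ruling out on the misbehaving load-balanced pair - a sketch, not a diagnosis: the forms-authentication ticket is encrypted and validated with the machineKey, so any server that has to read a cookie issued by another server needs identical explicit keys in web.config; auto-generated keys differ per machine. Key values are deliberately elided here.)

        <!-- web.config: same explicit keys on every server in the pair -->
        <system.web>
          <machineKey validationKey="..."
                      decryptionKey="..."
                      validation="SHA1"
                      decryption="AES" />
        </system.web>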

    Read the article

  • Facebook calling Google App Engine code using GET instead of POST

    - by Nick Gotch
    I've been developing a Facebook app using Google App Engine in Python and the pyfacebook bindings. For weeks everything worked fine, but suddenly it stopped. At first I thought it was a code change, so I rolled back the entire dev directory to a version I knew worked, but it still failed. It's possible a change I made to the application's settings caused the issue, but if so, I can't figure out what. What I have figured out is that instead of calling the post(self) method of my Main class, Facebook is calling it with a GET. Does anyone know why Facebook would use a GET method instead of a POST? It's an IFrame app. Thanks,
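
    (As a stopgap while the cause is tracked down, a hedged sketch of a webapp handler that simply routes GETs to the existing POST logic - the class name comes from the question, the body is illustrative:)

        from google.appengine.ext import webapp

        class Main(webapp.RequestHandler):
            def get(self):
                # Facebook is currently issuing GETs; reuse the POST handler so the canvas keeps rendering
                self.post()

            def post(self):
                # ... existing canvas-rendering logic would go here ...
                self.response.out.write('ok')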

    Read the article

  • jquery add a fade to an .addClass

    - by Nik
    How do I fade .addClass in and out? Here is the link - www.aus-media.com/dev/site_BYJ/new-students/ - and here is the code:

        $(document).ready(function() {
            $('#menu li#Q_01,#menu li#Q_03,#menu li#Q_05,#menu li#Q_07,#menu li#Q_09,#menu li#Q_11,#menu li#Q_13').hover(function() {
                $(this).addClass('pretty-hover');
            }, function() {
                $(this).removeClass('pretty-hover');
            });
        });
        $(document).ready(function() {
            $('#menu li#Q_02,#menu li#Q_04,#menu li#Q_06,#menu li#Q_08,#menu li#Q_10,#menu li#Q_12').hover(function() {
                $(this).addClass('pretty-hover_01');
            }, function() {
                $(this).removeClass('pretty-hover_01');
            });
        });

    Thanks
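
    (A sketch of one option, assuming the page can also load jQuery UI's effects core: its versions of addClass/removeClass accept a duration, so the class change itself animates. Selectors are shortened here for brevity.)

        // jQuery UI effects core must be loaded for the duration argument to work
        $(document).ready(function() {
            var odd = '#menu li#Q_01, #menu li#Q_03, #menu li#Q_05';   // ...and so on, as in the original
            $(odd).hover(function() {
                $(this).addClass('pretty-hover', 400);      // fade the class styles in over 400 ms
            }, function() {
                $(this).removeClass('pretty-hover', 400);   // and fade them back out
            });
        });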

    Read the article

  • Netbeans Editor Library?

    - by Jeremybub
    Netbeans seems to say in several places that it supports a library to just host the "Netbeans editor" widget in some other program. It has some weird documentation that seems to say a lot, but doesn't really say much about how to use it: http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-editor-lib2/architecture-summary.html I can't seem to find any download for the "Netbeans editor library" (1 or 2), and the documentation they provide says to download the entire mercurial repository, which doesn't really help me, since it doesn't tell me what is part of this "library" and what is not. If someone could point me to a download for this library, or some minimal documentation about how to use it, that would be great. I've already seen the blog post here, but it doesn't really help with getting the library, and it seems to be talking about classes which I can't find in the Netbeans sources I downloaded (Maybe a different version?)

    Read the article

  • Using clojure.contrib functions in slime REPL

    - by Tyler
    I want to use the functions in the clojure.contrib.trace namespace in slime at the REPL. How can I get slime to load them automatically? A related question: how can I add a specific namespace into a running REPL? On the clojure.contrib API it describes usage like this:

        (ns my-namespace
          (:require clojure.contrib.trace))

    But adding this to my code results in the file being unable to load, with an "Unable to resolve symbol" error for any function from the trace package. I use leiningen 'lein swank' to start the ServerSocket, and the project.clj file looks like this:

        (defproject test-project "0.1.0"
          :description "Connect 4 Agent written in Clojure"
          :dependencies [[org.clojure/clojure "1.2.0-master-SNAPSHOT"]
                         [org.clojure/clojure-contrib "1.2.0-SNAPSHOT"]]
          :dev-dependencies [[leiningen/lein-swank "1.2.0-SNAPSHOT"]
                             [swank-clojure "1.2.0"]])

    Everything seems up to date, i.e. 'lein deps' doesn't produce any changes. So what's up?
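
    (For pulling a namespace into an already-running REPL session, a sketch - require can be called directly at the prompt, independently of any file's ns form:)

        ;; At the slime REPL:
        (require 'clojure.contrib.trace)
        ;; or with an alias, so the functions can be called as trace/...
        (require '[clojure.contrib.trace :as trace])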

    Read the article

  • .NET: Preserving some, but not all query params during redirect

    - by kasper pedersen
    Hi all, Could someone tell me if the code below would achieve what I want, which is: Check if the query parameters 'return_path' and/or 'user_state' are present in the query string, and if so append them to the query string of the redirect URI. As I'm not a .NET dev and don't have a server to test this on, I was hoping someone could give me some feedback.

        ArrayList vars = new ArrayList();
        vars.Add("return_path");
        vars.Add("user_state");

        string newUrl = "/new/request/uri" + "?";

        ArrayList params = new ArrayList();
        foreach ( string key in Request.QueryString )
        {
            if (vars.contains(key))
            {
                params.Add(key + "=" + HttpUtility.URLPathEncode(Request.QueryString[key]));
            }
        }

        String[] paramArr = (String[]) params.ToArray( typeof (string) );
        String queryString = String.join("&", paramArr);

        Response.Redirect(newUrl);

    Thank you :)
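
    (Not quite as written - the casing of contains/join/URLPathEncode won't compile, params is a reserved word in C#, and the collected parameters are never appended to newUrl before the redirect. A hedged corrected sketch of the same idea, assuming the usual System, System.Collections.Generic and System.Web namespaces are in scope in the code-behind:)

        // Keep only the whitelisted parameters and carry them over to the redirect URL.
        string[] keep = { "return_path", "user_state" };
        List<string> parts = new List<string>();

        foreach (string key in Request.QueryString)
        {
            if (key != null && Array.IndexOf(keep, key) >= 0)
            {
                parts.Add(key + "=" + HttpUtility.UrlEncode(Request.QueryString[key]));
            }
        }

        string newUrl = "/new/request/uri";
        if (parts.Count > 0)
        {
            newUrl += "?" + String.Join("&", parts.ToArray());
        }
        Response.Redirect(newUrl);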

    Read the article

  • Strange permission errors with Windows Server 2008

    - by Spirit
    I just don't know a better way to describe this issue, which is driving me nuts. I am trying to establish a test domain with virtual machines on a box that has Win7 with VMware Workstation installed. The purpose of this domain is so that we can try and test different situations before they go into the production network. I built a VM with WinSrv2008R2 and I am using that VM as a template to make other servers for the domain by making clones of it. I raise a DC with one clone and a member server with another clone, and I add the server to the domain. I am following a standard procedure as always (it is not my first domain). Then I make an admin account and add it to the Domain Admins and Enterprise Admins groups. That account is an admin with full privileges on the DC - no problem there. But on the other server it has somewhat half the privileges, and I can't log in via RDP. I tried with another account. Same issues. For example (with half the privileges): I can't open the Event Viewer if I go via Start - Administrative Tools - Event Viewer, but I can open the Event Viewer via Server Manager. You can notice this on the image below. I mean WTF??? I am going crazy; I haven't experienced anything similar in my three years of experience. I have already lost 3 days troubleshooting this. Could this be related to the cloning? Perhaps if I make fresh installs of WinSrv2008 there won't be any problems? I've raised test domains as VMs on other occasions before, and there weren't any problems then. This is VMware Workstation 8. I've made clones before; on Workstation 7 there weren't any problems. Anyone have any ideas?

    UPDATE: This is the info from the event log when I try to access via RDP:

        An account failed to log on.
        Subject:
            Security ID:     NULL SID
            Account Name:    -
            Account Domain:  -
            Logon ID:        0x0
        Logon Type:          3
        Account For Which Logon Failed:
            Security ID:     NULL SID
            Account Name:    pat.coleman
            Account Domain:  lab
        Failure Information:
            Failure Reason:  Domain sid inconsistent.
            Status:          0xc000006d
            Sub Status:      0xc000019b
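
    (The "Domain sid inconsistent" failure reason is the classic symptom of clones that share a machine SID because they were never generalized. A sketch of the usual remedy - run sysprep on the template, or on each clone before it joins the domain:)

        REM Run inside the clone/template (Windows Server 2008 R2); the VM shuts down when it finishes
        C:\Windows\System32\sysprep\sysprep.exe /generalize /oobe /shutdown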

    Read the article

  • Why are my hard drives failing?

    - by WishCow
    I have a small Ubuntu server running at home, with 2 HDDs. There are two software raids (raid1) on the disks, managed by mdadm, which I believe is irrelevant, but I'm mentioning it anyway. Both of the HDDs are Western Digital, and had been used for around 2 years when one of them started making clicking noises, and died. I figured that maybe it's natural after 2 years, so I bought a new one and resynced the raid arrays. After about a month, the other drive also died. I didn't get suspicious: since both drives had been bought at the same time, it's not that surprising to see them fail close to each other, so I bought another one. So far, 2 old drives failed, and 2 brand new ones in the system.

    After one month, one of the new drives died. This is when it started getting suspicious. Since the PC was put together from some really old parts (think AthlonXP), I figured that maybe the motherboard's SATA controller is the culprit. Of course you can't switch parts easily in an old PC like this, so I bought a whole system: new MB, new CPU, new RAM. Took the just-failed drive back, since it was under warranty, and got it replaced. So it is up to 2 failed drives from the old ones, and 1 failed drive from the new ones. No problems, for 1 month. After that, errors were creeping up again in /var/log/messages, and mdadm was reporting raid array failures. I started tearing my hair out. Everything is new in the system, it's up to the third brand new HDD, it's simply not possible that all of the new drives that I bought were faulty.

    Let's see what is still common... the cables. Okay, long shot, let's replace the SATA cables. Take the HDD back, smile at the guy at the counter and say that I'm really unlucky. He replaces the HDD. I come home, one month passes and one of the HDDs fails, again. I'm not joking. Two of the brand new HDDs have failed. Maybe it's a bug in the OS. Let's see what the manufacturer's testing tool says. Download the testing tool, burn it to a CD, reboot, leave the HDD testing overnight. The test says that the drive is faulty, and I should back up everything, if I still can. I don't know what's happening, but it does not look like a software problem; something is definitely thrashing the HDDs.

    I should mention now that the whole system is in a shoebox. Since there is a load of "build your own ikea case" stuff around, I thought there shouldn't be any problems throwing the thing in a box and stuffing it away somewhere. The box is well ventilated, but I thought that just maybe the drives were overheating. There is no other possible answer to this. So I took the HDD back and got it replaced (for the 3rd time), and bought HDD coolers. And just now, I have heard the sound of doom. click click whizzzzzzzzz. SSH into the box:

        You have new mail!
        mail
        r 1
        DegradedArrayEvent on /dev/md0 ...

    dmesg output:

        [47128.000051] ata3: lost interrupt (Status 0x50)
        [47128.000097] end_request: I/O error, dev sda, sector 58588863
        [47128.000134] md: super_written gets error=-5, uptodate=0
        [48043.976054] ata3: lost interrupt (Status 0x50)
        [48043.976086] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
        [48043.976132] ata3.00: cmd c8/00:18:bf:40:52/00:00:00:00:00/e1 tag 0 dma 12288 in
        [48043.976135]          res 40/00:00:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
        [48043.976208] ata3.00: status: { DRDY }
        [48043.976241] ata3: soft resetting link
        [48044.148446] ata3.00: configured for UDMA/133
        [48044.148457] ata3.00: device reported invalid CHS sector 0
        [48044.148477] ata3: EH complete

    Recap:

    - No possibility of overheating.
    - 6 drives have failed, 4 of those have been brand new. I'm not sure now whether the original two were faulty, or suffered the same thing as the new ones.
    - There is nothing common in the system, apart from the OS, which is Ubuntu Karmic now (started with Jaunty). New MB, new CPU, new RAM, new SATA cables.
    - No, the little holes on the HDD are not covered.

    I'm crying. Really. I don't have the face to return to the store now; it's not possible for 4 drives to fail in under 4 months.

    A few ideas that I have been thinking about:

    - Is it possible that I fuck up something when I partition and resync the drives? Can it be so bad that it physically wrecks the drive (since the vendor-supplied tool says that the drive is damaged)? I do the partitioning with fdisk, and use the same block size for the raid1 partitions (I check the exact block sizes with fdisk -lu).
    - Is it possible that the Linux kernel or mdadm, or something else, is not compatible with this exact brand of HDDs, and thrashes them?
    - Is it possible that it may be the shoebox? Should I try placing it somewhere else? It's under a shelf now, so humidity is not a problem either.
    - Is it possible that a normal PC case will solve my problem (I'm going to shoot myself then)? I will get a picture tomorrow.
    - Am I just simply cursed?

    Any help or speculation is greatly appreciated.

    Edit: The power strip is guarded against overvoltage.
    Edit2: I have moved in between these 4 months, so the possibility of the cause being "dirty" electricity in both places is very low.
    Edit3: I have checked the voltages in the BIOS (couldn't borrow a multimeter), and they all seem correct; the biggest discrepancy is in the 12V, because it's supplying 11.3. Should I be worried about that?
    Edit4: I put my desktop PC's PSU into the server. The BIOS reported much more accurate voltage readings, and it also successfully rebuilt the raid1 array, which took some 3-4 hours, so I feel a little positive now. Will get a new PSU tomorrow to test with that. Also, attaching the picture about the box: (disregard the 3rd drive)
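
    (One more data point worth collecting before blaming the drives themselves - a sketch, assuming the smartmontools package is installed: the SMART attributes, error log and self-test history often show whether the drive or the link/controller is what keeps failing.)

        sudo apt-get install smartmontools
        sudo smartctl -a /dev/sda        # attributes, error log, self-test history
        sudo smartctl -t long /dev/sda   # kick off an offline long self-test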

    Read the article

  • Do MyISAM holes get filled in automatically?

    - by NNN
    When you run a delete on a MyISAM table, it leaves a hole in the table until the table is optimized. This affects concurrent inserts. On the default setting, concurrent inserts only work for tables without holes. However, in the documentation for MyISAM, under the concurrent_insert section, it says: "Enables concurrent inserts for all MyISAM tables, even those that have holes. For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread. Otherwise, MySQL acquires a normal write lock and inserts the row into the hole." http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_concurrent_insert Does that mean MyISAM automatically fills in holes whenever a new row is inserted into the table? Previously I thought the holes would not be filled until you ran OPTIMIZE TABLE.
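
    (For reference, a sketch of the two knobs involved - the quoted behaviour applies when concurrent_insert is set to 2, and holes can still be reclaimed explicitly; the table name is hypothetical:)

        -- Allow concurrent inserts even into MyISAM tables that have holes
        SET GLOBAL concurrent_insert = 2;

        -- Defragment and reclaim the holes when convenient
        OPTIMIZE TABLE my_table;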

    Read the article

  • Coordinating the logging output from a web app installed in a SharePoint farm.

    - by Kelly French
    We are deploying web parts to SharePoint 2007 and would like to include logging (log4net). The ideal solution would be to use a database appender to avoid the problem of knowing which actual server is executing the web part. This question has been helpful: http://stackoverflow.com/questions/219668/sharepoint-and-log4net. I've got log4net working in a stand-alone web app under the Visual Studio dev server, with the log4net settings in web.config and a file appender for the output. I'd like to transition to SharePoint and still use the log file output so I can make sure it's all working first, then change the config around to log to a database. Is this going to be too much trouble? How have other developers added log4net into their solutions for SharePoint?
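
    (For the intermediate file-based step, a minimal sketch of the kind of log4net section that could go into the web application's web.config - the appender and layout types are standard log4net, while the path and level are placeholders:)

        <log4net>
          <appender name="FileAppender" type="log4net.Appender.RollingFileAppender">
            <file value="C:\Logs\webparts.log" />
            <appendToFile value="true" />
            <layout type="log4net.Layout.PatternLayout">
              <conversionPattern value="%date [%thread] %-5level %logger - %message%newline" />
            </layout>
          </appender>
          <root>
            <level value="DEBUG" />
            <appender-ref ref="FileAppender" />
          </root>
        </log4net>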

    Read the article

  • Why does my Excel add-in only half work?

    - by Dan Crowther
    I've created an Excel add-in using Visual Studio 2008. It has a ribbon, a bunch of panes, and code that adds sheets and ranges and gets information scraped from a web page. When I run it on my dev PC it works perfectly. I used the Publish command to publish it and installed it on a Windows XP virtual PC. The installation seemed fine, and when I open Excel I see my ribbon. If I click a button that shows a pane, up pops the pane. If I enter some details into the pane that should create a range and populate it with data from a web page, the range is created but the web page is not visited (I have tested that I have connectivity). One of my buttons adds a hidden worksheet and another displays or hides that sheet. One of these buttons is not working. I've tried everything I can think of. Are there any permissions or trust issues I need to deal with?

    Read the article

  • How to debug GWT using Ant

    - by Phuong Nguyen de ManCity fan
    I know the job would be simpler if I used the Google Plugin for Eclipse. However, in my situation, I have heavily adopted Maven, and thus the plugin cannot suit me. (In fact, it gave me a whole week of headaches.) Instead, I rely on an Ant script that I learned from http://code.google.com/webtoolkit/doc/latest/tutorial/appengine.html The document was very clear; I followed the article and successfully invoked DevMode using ant devmode. However, the document didn't tell me about debugging GWT (like the Google Plugin for Eclipse can do). Basically, I want to add some parameter to an Ant task that exposes a debug port (something like (com.google.gwt.dev.DevMode at localhost:58807)) so that I can connect my Eclipse to it. How can I do that?
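
    (A sketch of the usual approach with a forked <java> task: pass standard JDWP options as jvmargs, then attach Eclipse with a "Remote Java Application" debug configuration on the same port. The classpath, module name and port below are placeholders modeled loosely on the GWT tutorial build file.)

        <target name="debugdevmode" depends="javac" description="Run DevMode with a JDWP debug port open">
          <java failonerror="true" fork="true" classname="com.google.gwt.dev.DevMode">
            <jvmarg value="-Xdebug"/>
            <jvmarg value="-Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000"/>
            <classpath>
              <pathelement location="src"/>
              <path refid="project.class.path"/>
            </classpath>
            <arg value="-startupUrl"/>
            <arg value="index.html"/>
            <arg value="com.example.MyModule"/>   <!-- hypothetical GWT module -->
          </java>
        </target>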

    Read the article

  • Increase Timeout for remote sessions in Debian 5 Lenny

    - by Ash
    I always get a remote connection timeout when using PuTTY, and also when I send emails with attachments from a mail server installed on Debian. I always get this error. I'm not sure if this is the firewall or the new Debian 5 installation I made. Are there any settings I need to fix after a fresh install? Any inputs are highly appreciated. This error is pulling my brains out. Thanks.

    Error:

        2011-01-10 15:21:13,454 INFO [btpool0-23://69.19.19.89/service/upload?fmt=extended] [[email protected];mid=72;ip=10.10.01.78;ua=Mozilla/5.0 (Windows;; U;; Windows NT 5.2;; en-US;; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13 (.NET CLR 3.5.30729);] FileUploadServlet - File upload failed
        org.apache.commons.fileupload.FileUploadBase$IOFileUploadException: Processing of multipart/form-data request failed. timeout
            at org.apache.commons.fileupload.FileUploadBase.parseRequest(FileUploadBase.java:367)
            at org.apache.commons.fileupload.servlet.ServletFileUpload.parseRequest(ServletFileUpload.java:126)
            at com.zimbra.cs.service.FileUploadServlet.handleMultipartUpload(FileUploadServlet.java:430)
            at com.zimbra.cs.service.FileUploadServlet.doPost(FileUploadServlet.java:412)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
            at com.zimbra.cs.servlet.ZimbraServlet.service(ZimbraServlet.java:181)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
            at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1166)
            at com.zimbra.cs.servlet.SetHeaderFilter.doFilter(SetHeaderFilter.java:79)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
            at org.mortbay.servlet.UserAgentFilter.doFilter(UserAgentFilter.java:81)
            at org.mortbay.servlet.GzipFilter.doFilter(GzipFilter.java:132)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
            at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:388)
            at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:218)
            at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
            at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765)
            at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:418)
            at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
            at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
            at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
            at org.mortbay.jetty.handler.rewrite.RewriteHandler.handle(RewriteHandler.java:230)
            at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
            at org.mortbay.jetty.handler.DebugHandler.handle(DebugHandler.java:77)
            at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
            at org.mortbay.jetty.Server.handle(Server.java:326)
            at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:543)
            at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:939)
            at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:755)
            at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:218)
            at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:405)
            at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:413)
            at org.mortbay.thread.BoundedThreadPool$PoolThread.run(BoundedThreadPool.java:451)
        Caused by: org.mortbay.jetty.EofException: timeout
            at org.mortbay.jetty.HttpParser$Input.blockForContent(HttpParser.java:1172)
            at org.mortbay.jetty.HttpParser$Input.read(HttpParser.java:1122)
            at org.apache.commons.fileupload.MultipartStream$ItemInputStream.makeAvailable(MultipartStream.java:977)
            at org.apache.commons.fileupload.MultipartStream$ItemInputStream.read(MultipartStream.java:887)
            at java.io.InputStream.read(InputStream.java:85)
            at org.apache.commons.fileupload.util.Streams.copy(Streams.java:94)
            at org.apache.commons.fileupload.util.Streams.copy(Streams.java:64)
            at org.apache.commons.fileupload.FileUploadBase.parseRequest(FileUploadBase.java:362)
            ... 33 more
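
    (The PuTTY side of this is often idle sessions being dropped by a firewall/NAT device rather than anything Zimbra-specific; a sketch of server-side keepalive settings that frequently help, for /etc/ssh/sshd_config on the Debian box - the values are examples:)

        # Send an application-level keepalive every 60 s; give up after 3 missed replies
        ClientAliveInterval 60
        ClientAliveCountMax 3

    The client-side equivalent is PuTTY's "Seconds between keepalives" setting under the Connection category.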

    Read the article

  • ASP.NET site freezing up, showing odd text at top of the page while loading, on one server

    - by MGOwen
    I have various servers (dev, 2 x test, 2 x prod) running the same ASP.NET site. The test and prod servers are in load-balanced pairs. One of these pairs is exhibiting some kind of (super) slowdown or freezing every other page load or so. Sometimes a line of text appears at the very top of the page which looks something like this (the beginning and end are "cut off"):

        00 OK
        Date: Thu, 01 Apr 2010 01:50:09 GMT
        Server: Microsoft-IIS/6.0
        X-Powered_By: ASP.NET
        X-AspNet-Version:2.0.50727
        Cache-Control:private
        Content-Type:text/html; charset=ut

    Has anyone seen anything like this before? Any idea what it means or what's causing it?

    Read the article

  • Proper Imaging Procedures to Restore and Deploy Image with Separate System Reserved Partition

    - by alharaka
    UPDATE: As per my experience here, no one responded. If I do not hear back from the TechNet forum members about it, I will post a bounty here, if it makes a difference.

    I have banged my head against a wall for what seems like all week. I am going to explain my simple procedure, and how none of it, absolutely none, seems to work afterward, despite the few alternatives and everyone on the internet telling me this is how to do it.

    Diskpart commands to create the filesystem structure:

        REM Select the disk targeted for deployment.
        REM
        REM NOTE: Usually disk 0, but drive failure can make it external USB
        REM media. This will erase the drive regardless!
        select disk 0
        REM Remove previous formatting.
        clean
        REM Create System Reserved partition bootloader and files.
        create partition primary size=100
        REM Format the volume
        format fs=ntfs label="System Reserved" quick override noerr
        REM Assign the System Reserved partition the C: mount for now
        assign letter=C
        REM The main system partition, size not specified to occupy whole drive.
        create partition primary
        REM Format the volume
        format fs=ntfs quick override noerr
        REM Assign the OS partition the D: mount for now
        assign letter=D
        REM Make this the active/bootable partition.
        sel disk 0
        sel partition 1
        active
        REM Close out the diskpart session.
        exit

    Now, I thought this was madness, but it turns out the System Reserved partition and the standard "System Partition" (C:, commonly both the boot and system volumes, where you find the Windows directory AND the bootmgr/ntldr hardware files - this is where Windows 7 diverges), as mounted in the Windows PE session where I run these commands, do not matter. See reference here. Since this needs to be BitLocker-ready, enter this crappy System Reserved partition, a separate 100MB of awesome that goes before the regular boot volume. I do this, then I proceed to the next step.

    Deploy the System Reserved and normal system images:

        REM C is still the "System Reserved Partition", and the image is just like it sounds.
        imagex /apply G:\images\systemreserved.wim 1 C:
        REM D is now what will be the C: system partition on reboot, supposedly.
        imagex /apply G:\images\testimage.wim 1 D:

    Reboot the system. Now, the images I just captured should look good. This is not even sysprepped, but reapplying the same fscking image I prepared on the same reference workstation hours before. The problem is I get 0xc000000e "could not detect the accessible boot device \Windows\system32\winload.exe", or different kinds of nonsense revolving around being able to find the boot volume with all the right files.

    I tried different variations of things; now none of them work. I tried repairs with bcdboot, with a fresh System Reserved partition or not, bootrec, and manually editing the damn BCD store with bcdedit. I tried finalizing the above process with and without bootsect /nt60 C: /force. I need to wrap up and automate this procedure. What am I doing wrong that does not make the image happy, but really just miserable?
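
    (For completeness, a sketch of the standard BCD-rebuild step that ties the two partitions together after the images are applied, with the drive letters as mounted in the WinPE session above - a reference recipe rather than a guaranteed fix, since bcdboot has already been tried in some form:)

        REM Copy boot files from the applied OS image to the System Reserved partition and rebuild the BCD there
        bcdboot D:\Windows /s C:
        REM Ensure the BOOTMGR-style boot code is on the volume and in the MBR
        bootsect /nt60 C: /force /mbr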

    Read the article

  • Fixing warning from git

    - by japancheese
    My workflow has been to create a git repository on a remote central server, clone that repo on my local dev machine, do some work, and then push the changes back to the same repo on the remote server. However - and I believe this started after an update I did to git recently - after pushing up a change, I'm getting the following warning:

        Counting objects: 2724, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (2666/2666), done.
        Writing objects: 100% (2723/2723), 5.90 MiB | 313 KiB/s, done.
        Total 2723 (delta 219), reused 0 (delta 0)
        warning: updating the currently checked out branch; this may cause confusion,
        as the index and working tree do not reflect changes that are now in HEAD.

    Can someone explain to me exactly what this warning means, and what I should change in my workflow so that I don't receive it?
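
    (Roughly, the warning means the remote repository is a non-bare repository with the pushed branch checked out, so its working tree no longer matches its HEAD after the push. The usual remedy is to make the central repository a bare one - a sketch, with hypothetical paths and URL:)

        # On the central server: make a bare copy of the existing repository
        git clone --bare /srv/git/project /srv/git/project.git

        # On the dev machine: point the existing clone at the bare repository
        git remote set-url origin user@server:/srv/git/project.git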

    Read the article

  • Installing PostGIS on Windows

    - by Cornflake
    I've installed PostgreSQL and PostGIS, and now I'm trying to follow these instructions: http://docs.djangoproject.com/en/dev/ref/contrib/gis/install/#spatialdb-template But I keep getting the following error, both in the command prompt and in Cygwin:

        C:\Users\Home>createdb -E UTF8 template_postgis
        createdb: could not connect to database postgres: could not connect to server: No such file or directory
                Is the server running locally and accepting
                connections on Unix domain socket "/tmp/.s.PGSQL.5432"?

    And I know PostgreSQL is running, because I'm using it right now! Installing open source applications can sometimes be so frustrating... I'll be very grateful for your help!
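
    (The error shows the client trying a Unix-domain socket, which the Windows service doesn't listen on; forcing a TCP connection usually gets past this - a sketch, with the port and user adjusted to the local install:)

        createdb -h localhost -p 5432 -U postgres -E UTF8 template_postgis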

    Read the article

  • How would one use Cocos2d to create a game like this?

    - by John Stewart
    http://itunes.apple.com/us/app/angry-birds/id343200656?mt=8&ign-mpt=uo%3D6 So I am getting started with this whole game dev thing on iPhone, and I decided that I will start playing with Cocos2d as my starting engine. Now, just so I have a goal in mind, I picked Angry Birds as my initial target for the sort of gameplay I would like to learn to build. This is not going to be a market-release game; this is for learning purposes only. So to start off, my questions are: Would something like this be achievable using Cocos2d? How would I go about building the physics for this? How can one do a screen scroll like that in Cocos2d? (Any example code would be great.) This is just to start off. If you have any particular questions, please do add to this question.
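
    (For the scrolling part, a sketch of one common cocos2d-iphone approach - have the layer follow the projectile sprite with a CCFollow action; the sprite, layer and world size here are hypothetical, and the physics side would typically come from the Box2D or Chipmunk libraries that ship with cocos2d:)

        // Assuming `bird` is a CCSprite already added to this CCLayer subclass
        [self runAction:[CCFollow actionWithTarget:bird
                                     worldBoundary:CGRectMake(0, 0, 2000, 320)]];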

    Read the article

  • Appengine not liking my .jspx files

    - by Hans Westerbeek
    I have a little app that runs fine on the local dev appengine, but appengine itself is not processing my .jspx files. The jspx files are in WEB-INF, so they should not be excluded by appengine (as a static resource). I am using Apache Tiles to define my views. So the html produced looks like this:

        <html xmlns:jsp="http://java.sun.com/JSP/Page"
              xmlns:c="http://java.sun.com/jsp/jstl/core"
              xmlns:tiles="http://tiles.apache.org/tags-tiles" >
            <jsp:output omit-xml-declaration="yes"/>
            <jsp:directive.page contentType="text/html;charset=UTF-8" />
            <jsp:directive.page isELIgnored="false"/>
            (etc etc)

    How can I solve this problem?
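
    (One thing worth trying, as a sketch: by default only *.jsp is treated as a JSP, so web.xml can declare *.jspx as JSP documents via a jsp-property-group - these are standard Servlet 2.4+ elements, though whether App Engine honours them for a given setup would need to be verified:)

        <!-- web.xml: treat .jspx files as JSP documents (XML syntax) -->
        <jsp-config>
          <jsp-property-group>
            <url-pattern>*.jspx</url-pattern>
            <is-xml>true</is-xml>
          </jsp-property-group>
        </jsp-config>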

    Read the article
