Search Results

Search found 16899 results on 676 pages for 'local'.

Page 601/676

  • Peculiar JRE behaviour running RMI server under load, should I worry?

    - by darri
    I've been developing a minimalistic Java rich-client CRUD application framework for the past few years, mostly as a hobby but also actively using it to write applications for my current employer. The framework provides database access to clients either via a local JDBC-based connection or a lightweight RMI server. Last night I started a load-testing application which ran 100 headless clients bombarding the server with requests, each client waiting only 1 - 2 seconds between running simple use cases, consisting of selecting records along with associated detail records from a simple e-store database (Chinook). This morning, when I looked at the telemetry results from the server profiling session, I noticed something which seemed strange to me (and made me keep the setup running for the remainder of the day); I don't really know what conclusions to draw from it. Here are the results, as four telemetry charts: memory, GC activity, threads, and CPU load. Interesting, right? So the question is: is this normal or erratic? Is this simply the JRE (1.6.0_03 on Windows XP) doing its thing (perhaps related to the JRE configuration), or is my framework design somehow causing this? Running the server against MySQL, as opposed to an embedded H2 database, does not affect the pattern. I am leaving out the details of my server design, but I'll be happy to elaborate if this behaviour is deemed erratic.
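
    For reference, the load pattern described above (100 headless clients, each pausing 1 - 2 seconds between use cases) can be sketched in a few lines of Python; run_use_case() is a hypothetical stand-in for whatever call exercises the framework's client API, not part of the framework itself:

        import random
        import threading
        import time

        def run_use_case(client_id):
            # Hypothetical stand-in for one select-with-details use case against the server.
            pass

        def client_loop(client_id, duration_s=60):
            deadline = time.time() + duration_s
            while time.time() < deadline:
                run_use_case(client_id)
                time.sleep(random.uniform(1.0, 2.0))  # 1 - 2 s think time, as in the test described

        threads = [threading.Thread(target=client_loop, args=(i,)) for i in range(100)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()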

    Read the article

  • How to get the recently viewed pictures on the web browser?

    - by quantity
    I want to retrieve the recently viewed pictures from IE. I know that all the files IE downloads end up in the temporary internet directory, commonly at a path like "C:\Documents and Settings\[account]\Local Settings\Temporary Internet Files". Here is where something strange comes in for me. I wrote a C++ program to enumerate the directory above, and the result says it contains three subdirectories and one file. These subdirectories are Content.IE5, OIS, and OLK145, and each contains lots of pictures, which I think are the ones I browsed recently on the web. The only file is desktop.ini, which is not my concern. However, when I open the directory in the file system, there are no subdirectories at all, just a lot of files, different from the ones in the subdirectories retrieved by the program. I have several questions. First of all, why does the content of the Temporary Internet Files folder look different? Which is the actual state of the directory? Second, I found that in the file-system explorer the files in the directory seem like links to the ones on the web, not files that physically exist on my computer; is this true? Finally, how can I get the pictures viewed recently in IE with C++, as well as their original URLs?
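
    If the difference is (as it usually is on Windows XP) that Explorer shows a special shell view of the IE cache while the real files sit in hidden subdirectories such as Content.IE5, then a plain directory listing from any language will show the on-disk layout that the C++ program saw. A quick Python cross-check sketch; [account] is the placeholder from the question and must be replaced with the real account name:

        import os

        cache_dir = r"C:\Documents and Settings\[account]\Local Settings\Temporary Internet Files"

        # os.listdir reports what is actually on disk, including hidden subdirectories,
        # independent of Explorer's special cache view.
        for name in sorted(os.listdir(cache_dir)):
            path = os.path.join(cache_dir, name)
            kind = "dir " if os.path.isdir(path) else "file"
            print(kind, name)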

    Read the article

  • Need help optimizing this Django aggregate query

    - by Chris Lawlor
    I have the following model:

        class Plugin(models.Model):
            name = models.CharField(max_length=50)
            # more fields

    which represents a plugin that can be downloaded from my site. To track downloads, I have:

        class Download(models.Model):
            plugin = models.ForeignKey(Plugin)
            timestamp = models.DateTimeField(auto_now=True)

    So to build a view showing plugins sorted by downloads, I have the following query:

        # pbd is plugins by download - commented here to prevent scrolling
        pbd = Plugin.objects.annotate(dl_total=Count('download')).order_by('-dl_total')

    Which works, but is very slow. With only 1,000 plugins, the average response is 3.6 - 3.9 seconds (devserver with a local PostgreSQL db), whereas a similar view with a much simpler query (sorting by plugin release date) takes 160 ms or so. I'm looking for suggestions on how to optimize this query. I'd really prefer that the query return Plugin objects (as opposed to using values), since I'm sharing the same template with the other views (plugins by rating, plugins by release date, etc.), so the template expects Plugin objects - plus I'm not sure how I would get things like the absolute_url without a reference to the plugin object. Or is my whole approach doomed to failure? Is there a better way to track downloads? I ultimately want to provide users some nice download statistics for the plugins they've uploaded - like downloads per day/week/month. Will I have to calculate and cache download counts at some point? EDIT: In my test dataset there are somewhere between 10-20 Download instances per Plugin - in production I expect this number would be much higher for many of the plugins.
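
    One commonly suggested alternative to counting at query time is to denormalize a download counter onto Plugin and bump it whenever a Download row is recorded. A minimal sketch, assuming an exact-at-write-time count is acceptable; the download_count field and record_download helper are illustrative names, not from the question:

        from django.db import models
        from django.db.models import F

        class Plugin(models.Model):
            name = models.CharField(max_length=50)
            # Denormalized counter, indexed so order_by('-download_count') can use the index.
            download_count = models.PositiveIntegerField(default=0, db_index=True)

        class Download(models.Model):
            plugin = models.ForeignKey(Plugin, on_delete=models.CASCADE)
            timestamp = models.DateTimeField(auto_now=True)

        def record_download(plugin_id):
            # Keep the detail row for per-day/week/month statistics...
            Download.objects.create(plugin_id=plugin_id)
            # ...and bump the counter atomically in the database.
            Plugin.objects.filter(pk=plugin_id).update(download_count=F('download_count') + 1)

        # The listing view then avoids the aggregate join entirely and still returns Plugin objects:
        # pbd = Plugin.objects.order_by('-download_count')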

    Read the article

  • Recreating http request with cURL incl. files

    - by Toby
    I consistently get the error 'failed creating formpost data' from the code below. The same thing works perfectly on my local testing server, but on my shared host it throws the error. The sample part is just to simulate building the array with both files and non-file data. Essentially all I'm trying to do here is redirect the same HTTP request to another server, but I'm running into so many troubles.

        $count = count($_FILES['photographs']['tmp_name']);
        $file_posts = array('samplesample' => 'ladeda');
        for ($i = 0; $i < $count; $i++) {
            if (!empty($_FILES['photographs']['name'][$i])) {
                $fn = genRandomString();
                $file_posts[$fn] = "@" . $_FILES['photographs']['tmp_name'][$i];
            }
        }
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, "http://myurl/wp-content/plugins/autol/rec.php");
        curl_setopt($ch, CURLOPT_USERAGENT, "Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)");
        curl_setopt($ch, CURLOPT_HEADER, TRUE);
        curl_setopt($ch, CURLOPT_POST, TRUE);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $file_posts);
        curl_exec($ch);
        print curl_error($ch);
        curl_close($ch);

    Read the article

  • PGU HTML Renderer can't render most sites

    - by None
    I am trying to make a web browser using pygame. I am using PGU for HTML rendering. It works fine when I visit simple sites, like example.com, but when I try to load anything more complex that uses an HTML form, like Google, I get this error:

        UnboundLocalError: local variable 'e' referenced before assignment

    I looked in the PGU HTML rendering file and found this code segment:

        def start_input(self,attrs):
            r = self.attrs_to_map(attrs)
            params = self.map_to_params(r) #why bother
            #params = {}
            type_,name,value = r.get('type','text'),r.get('name',None),r.get('value',None)
            f = self.form
            if type_ == 'text':
                e = gui.Input(**params)
                self.map_to_connects(e,r)
                self.item.add(e)
            elif type_ == 'radio':
                if name not in f.groups:
                    f.groups[name] = gui.Group(name=name)
                g = f.groups[name]
                del params['name']
                e = gui.Radio(group=g,**params)
                self.map_to_connects(e,r)
                self.item.add(e)
                if 'checked' in r:
                    g.value = value
            elif type_ == 'checkbox':
                if name not in f.groups:
                    f.groups[name] = gui.Group(name=name)
                g = f.groups[name]
                del params['name']
                e = gui.Checkbox(group=g,**params)
                self.map_to_connects(e,r)
                self.item.add(e)
                if 'checked' in r:
                    g.value = value
            elif type_ == 'button':
                e = gui.Button(**params)
                self.map_to_connects(e,r)
                self.item.add(e)
            elif type_ == 'submit':
                e = gui.Button(**params)
                self.map_to_connects(e,r)
                self.item.add(e)
            elif type_ == 'file':
                e = gui.Input(**params)
                self.map_to_connects(e,r)
                self.item.add(e)
                b = gui.Button(value='Browse...')
                self.item.add(b)
                def _browse(value):
                    d = gui.FileDialog()
                    d.connect(gui.CHANGE,gui.action_setvalue,(d,e))
                    d.open()
                b.connect(gui.CLICK,_browse,None)
            self._locals[r.get('id',None)] = e

    I got the error in the last line, because e wasn't defined. I am guessing the reason for this is that the if statement that checks the type of the input and creates the e variable didn't match anything. I added a line to print the type_ variable and I got 'hidden' when I tried Google and Apple. Is there any way to render form items that have the type 'hidden' with PGU?
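
    A minimal sketch of one defensive workaround, assuming PGU's gui.Input (already used for 'text' and 'file' above) is an acceptable stand-in for hidden fields; these are fragments to splice into start_input, not a standalone program, and they are a guess at a workaround rather than PGU's official behaviour:

        # At the top of start_input, before the if/elif chain, so 'e' always exists:
        e = None

        # An extra branch for hidden inputs; the widget is created so the form keeps
        # the value, but it is never added to the visible layout:
        elif type_ == 'hidden':
            e = gui.Input(**params)
            self.map_to_connects(e, r)

        # And guard the final assignment so unknown input types no longer raise
        # UnboundLocalError:
        if e is not None:
            self._locals[r.get('id', None)] = e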

    Read the article

  • HTTP 401.3 when PUT, DELETE to ADO.NET Data Service (.svc)

    - by Nate
    I have an ADO.NET Data Service (we'll call it service.svc). When I deploy it to an IIS 6 site with Integrated Windows Authentication turned on, all requests (GET, POST, PUT, and DELETE) work fine for me, because I am an administrator on the box. However, when a non-admin user hits the service, only GET and POST requests work. When they try a PUT or DELETE request, they get an HTTP 401.3 "Access is Denied" error: "Error message 401.3: You do not have permission to view this directory or page using the credentials you supplied (access denied due to Access Control Lists). Ask the web server's administrator to give you access to '...\service.svc'." If I give the "Authenticated Users" local group write access to the .svc file, everything works as it should, but I really don't want to do this (and don't think I should have to do this to get this to work). In fact, I'm confused as to why changing the file permissions would affect this at all, but it definitely seems to be the problem. I've found a couple of different suggestions to fix somewhat similar problems in the Microsoft forums (Here, and I would post more links, but am being told that new users can only post one link in a post), but none of the solutions help. Any help is much appreciated. I am certainly no IIS expert, and this one has got me stumped.

    Read the article

  • View Source and Chrome Developer Tools showing different output

    - by patricksweeney
    I have a page located here. Viewing it in Chrome and Firefox shows a really small h1 title, and it also changes color as if it were a link. The template that generates everything looks exactly how it should. When diagnosing the issue, the relevant section of code looks like this in View Source:

        <div class="page-heading">
            <h1>Title Here</h1>
        </div>

    However, when I view it in Chrome's developer tools, it is throwing in extraneous malformed anchor tags, which is obviously causing the hovering behavior. This is what it looks like to the dev tools:

        <div class="page-heading">
            <h1>
                <a style="font-family: arial; font-size: 9px" <="" a="">Title Here</a>
            </h1>
        </div>

    In addition, when viewing a local copy of the site, the output shown in the dev tools is the same as the view source, and both render correctly locally. Oddly enough, all versions of IE render it correctly. The current versions of both Chrome and Firefox render it weirdly. Initially I thought it might be a user-agent stylesheet problem, but if anything the CSS is fine; it's the HTML that is malformed.

    Read the article

  • Designers, Expression or SharePoint Designer, and real source control

    - by David Lively
    I'm trying desperately to move from VSS to a real source control system. Options include TFS and SVN. My designers need to keep their ability to modify source files and instantly preview their changes in a browser without having to commit their changes. Using FPSE with VSS, this works flawlessly, since saving a file causes the copy in the working folder on the dev server to be updated, so they can just save and refresh their browser which is pointed at the dev server. The site in question consists of 350k+ lines of classic ASP code and some new ASP.NET MVC. They only need to be able to modify views within the MVC code, not C#. Though Expression includes a version of Cassini for local debugging, Cassini does not support classic ASP. Surely someone has solved this problem before. It can't be necessary to install IIS on each designer's machine (this is absolutely untenable). I need a way to have a common working folder on a dev webserver updated whenever someone saves a file locally, just like using FPSE. I'd rather not write an FPSE proxy that knows how to talk to TFS/SVN. Any suggestions? (I know I've asked this question in the past, but I haven't yet found a solution.)

    Read the article

  • SVN Version Rollback Question

    - by phimuemue
    Hello, I'm using SVN (TortoiseSVN) and often find myself in the following situation: I want to discard any changes since a specific (old) revision and turn all files back to that specific (old) version. Then I want to keep working as if that specific (old) revision were the newest one, i.e. I want to be able to commit the specific old revision as a new revision. I found several solutions for this problem (for example stackoverflow.com/questions/402159/roll-back-or-revert-entire-svn-repository-to-an-older-revision or rustyrazorblade.com/2007/04/how-to-roll-back-commits-to-an-earlier-version-of-a-repository-in-svn/). However, I wonder if there is a simpler way to roll back to a specific revision. I thought version control is supposed to be good for just such things (or am I misunderstanding something?). Is there a simple command/button/etc. that takes and updates my local working copy to an old revision and declares it to be the newest one? Since I suppose that there is no "built-in" function to do this, I wanted to know what reasons led the developers to the decision not to integrate this feature. Does anybody know?

    Read the article

  • Unable to Connect to Management Studio Server

    - by Phil Hilliard
    I have a nasty situation. I am using Microsoft SQL Server Management Studio Express edition locally on my PC for testing, and once tested I upload database changes to a remote server. I deleted the default database on my local machine, and instead of searching hard enough to find an answer to that problem, I uninstalled and reinstalled Management Studio. Since then, Management Studio has not been able to connect to the server. Is there any help (or hope for me, for that matter) out there? The following is the detailed error message:

        ===================================
        Cannot connect to LENOVO-E7A54767\SQLEXPRESS.
        ===================================
        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (.Net SqlClient Data Provider)
        ------------------------------
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=-1&LinkId=20476
        ------------------------------
        Error Number: -1
        Severity: 20
        State: 0
        ------------------------------
        Program Location:
        at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
        at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
        at System.Data.SqlClient.TdsParser.Connect(ServerInfo serverInfo, SqlInternalConnectionTds connHandler, Boolean ignoreSniOpenTimeout, Int64 timerExpire, Boolean encrypt, Boolean trustServerCert, Boolean integratedSecurity, SqlConnection owningObject)
        at System.Data.SqlClient.SqlInternalConnectionTds.AttemptOneLogin(ServerInfo serverInfo, String newPassword, Boolean ignoreSniOpenTimeout, Int64 timerExpire, SqlConnection owningObject)
        at System.Data.SqlClient.SqlInternalConnectionTds.LoginNoFailover(String host, String newPassword, Boolean redirectedUserInstance, SqlConnection owningObject, SqlConnectionString connectionOptions, Int64 timerStart)
        at System.Data.SqlClient.SqlInternalConnectionTds.OpenLoginEnlist(SqlConnection owningObject, SqlConnectionString connectionOptions, String newPassword, Boolean redirectedUserInstance)
        at System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, Object providerInfo, String newPassword, SqlConnection owningObject, Boolean redirectedUserInstance)
        at System.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection)
        at System.Data.ProviderBase.DbConnectionFactory.CreateNonPooledConnection(DbConnection owningConnection, DbConnectionPoolGroup poolGroup)
        at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
        at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
        at System.Data.SqlClient.SqlConnection.Open()
        at Microsoft.SqlServer.Management.UI.VSIntegration.ObjectExplorer.ObjectExplorer.ValidateConnection(UIConnectionInfo ci, IServerType server)
        at Microsoft.SqlServer.Management.UI.ConnectionDlg.Connector.ConnectionThreadUser()

    Read the article

  • MySQL connection attempt works fine in 5.2.9 but not in 5.3.0 - Help?

    - by Rich
    Hi, I'm having trouble making a secondary MySQL connection (to a separate, external DB) in my code. It works fine in PHP 5.2.9 but fails to connect in PHP 5.3.0. I'm aware of (at least some of) the changes needed to make successful MySQL connections in the newer version of PHP, and have succeeded before, so I'm not sure why it isn't working this time. I already have a connection open to a local database. The function below is then used to make an additional connection to a separate, remote database. The included config file simply contains the external database details (host, user, pass and name). I have checked and it is being included correctly.

        function connectDP() {
            global $dpConnection;
            include("secondary_db_config.php");
            $dpConnection = mysql_connect($dp_dbHost, $dp_dbUser, $dp_dbPass, true)
                or DIE("ERROR: Unable to connect to Deployment Platform");
            mysql_select_db($dp_dbName, $dpConnection)
                or DIE("ERROR 006: Unable to select Deployment Platform Database");
        }

    I then attempt to make this new connection simply by calling the function:

        connectDP();

    But when loading the page (in 5.3.0), I get the message:

        ERROR: Unable to connect to Deployment Platform

    I'm using the optional new_link boolean flag as the fourth argument to mysql_connect() and it's still not working. I've been wracking my brain this morning trying to figure out why this connection doesn't work (while I've done something very similar elsewhere with a separate second database that does work). Any help would be appreciated. Thanks! Rich

    Read the article

  • Velocity CTP: can we 'search' for objects?

    - by Stato Machino
    It appears that 'tags' allow us to associate a 'search term' with the objects placed into the Velocity cache space. However, these can only be queried within a 'region'. Further, regions somehow limit the locality of objects in the cache to a single server (or maybe something kinda like that). So this appears to make it hard to perform any operation for which the unique id of the cached item is not persisted or continuously available to the application that stores and retrieves objects to and from the cache. In any case, I can't see an easy way to 'cleanse' the cache of objects, or to find objects across the entire cache that may share some prefix, postfix or infix value in the cache key, so that I can clear out the cache of objects repeatedly created in unit tests, for example. And I am unsure about the consequences of regions being associated with single-server cache locations. So I would appreciate any help with the following questions:

    1. What is the difference between a 'distributed cache' (called a 'partitioned' cache??) when using regions, and a 'local cache'?
    1.a. In particular, are the region-oriented values in a distributed cache visible through a cache factory that is configured to 'see' the entire cache space?
    2. Are the operations of creating and removing 'regions' efficient enough that it would be reasonable to create a region and a group of tags for each bundle of objects that need to be cached?
    2.a. Or does this just push the problem of scoping the 'search for objects' up the chain, because the ability of the DataCache object to query down through regions and tags is as limited as querying for the cache keys of objects themselves?

    Thanks, Stato

    Read the article

  • How can you exclude a large number of records in a cross db query using LINQ2SQL?

    - by tap
    So here is my situation: I have a vendor-supplied DB we cannot modify and a custom DB that imports data from the vendor app and acts on it. Once records are imported from the vendor app, they cannot appear on the list of records to be imported. Also, we only want to display the 250 most recent records that have not been imported. What I originally started with was selecting the list of ids that have been imported from the custom DB, and then querying the vendor DB, using the list of ids in a .Where(x => !idList.Contains(x.Id)) clause on the remote query. This worked up until we broke 2100 records imported into the custom DB, as 2100 is the limit on the number of parameters that can be passed into SQL. After finding out this was the actual problem, and not the 'invalid buffer'/'severe error' ADO.NET reported, my solution was to remove the first 2000 ids in the remote query and then remove the remaining records in the local query. Having to pull back a large number of irrelevant records just to exclude them, so I can get the correct 250 records, seems very inelegant. Is there a better way to do this, short of doing a cross-DB stored procedure? Thanks in advance.
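
    The general pattern being described - page through the most recent vendor records and filter out already-imported ids on the application side until 250 survivors are collected - can be sketched in Python for illustration; fetch_recent_page() is a hypothetical stand-in for the remote query, and the real code is C#/LINQ, not Python:

        def newest_unimported(fetch_recent_page, imported_ids, wanted=250, page_size=500):
            """Collect the newest records whose ids are not in imported_ids."""
            imported = set(imported_ids)   # in-memory membership test instead of a huge SQL NOT IN
            results, offset = [], 0
            while len(results) < wanted:
                page = fetch_recent_page(offset, page_size)  # newest-first page from the vendor DB
                if not page:
                    break
                results.extend(r for r in page if r.id not in imported)
                offset += page_size
            return results[:wanted]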

    Read the article

  • subversion: how to manage tweaked files

    - by punk4funk
    Our group is considering moving to SVN, but I can't seem to find a way to do the following: I need to make minor tweaks locally to about 20 files in the repository without having SVN consider them "changed" and include them in the commit. (Changes like communication time-outs and logging levels.) Ideally I would want to merge the tweaked files with newer versions in the repository. (Keeping the tweaked local files up to date with committed changes from other users.) I can't imagine we're unique in wanting/needing this. Are there best practices around this type of use case? One thing I'm considering is putting all the tweaked files into a branched "tweaked" working copy, then merging my tweaked files into my "official" working copy, then using a script, which compares the "tweaked" and "official" working copies, to update my ignore list. The script would also un-ignore and alert me to any files that had tweaks and other changes that, presumably, needed to be committed to the repository. This seems kinda hacky, and I can't imagine there's not a better way.
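
    A minimal Python sketch of the comparison-script idea mentioned above (two working-copy paths in, list of differing files out); this only illustrates the diffing step, not the ignore-list update, and the function name is illustrative:

        import filecmp
        import os

        def changed_files(official_wc, tweaked_wc):
            """Recursively list files reported as differing between two working copies."""
            diffs = []

            def walk(cmp):
                diffs.extend(os.path.join(cmp.left, name) for name in cmp.diff_files)
                for sub in cmp.subdirs.values():
                    walk(sub)

            # Skip SVN metadata directories when comparing.
            walk(filecmp.dircmp(official_wc, tweaked_wc, ignore=['.svn']))
            return diffs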

    Read the article

  • Read header data from files on remote server

    - by rejeep
    Hi! I'm working on a project right now where I need to read header data from files on remote servers. I'm talking about many large files, so I can't read whole files, just the header data I need. The only solution I have is to mount the remote server with FUSE and then read the headers from the files as if they were on my local computer. I've tried it and it works, but it has some drawbacks, especially with FTP:

    - Really slow (FTP compared to SSH with curlftpfs): from the same server, 90 files were read over SSH in 18 seconds, and 10 files over FTP in 39 seconds.
    - Not dependable: sometimes the mountpoint will not be unmounted; if the server is active and a passive mount is done, that mountpoint and the parent folder get locked in about 3 minutes.
    - It times out, even when a data transfer is going on (I guess this is the FTP protocol and not curlftpfs).

    FUSE is a solution, but I don't like it very much because I don't feel that I can trust it. So my question is basically whether there are any other solutions to the problem. The language is preferably Ruby, but any other will work if Ruby does not support the solution. Thanks!
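
    Since the question allows languages other than Ruby, here is a minimal Python sketch of reading only the first bytes of a remote file over SFTP with paramiko, without mounting anything; host, credentials and path are placeholders, and error handling is omitted:

        import paramiko

        def read_header(host, username, password, remote_path, nbytes=512):
            """Fetch only the first nbytes of a remote file over SFTP."""
            client = paramiko.SSHClient()
            client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
            client.connect(host, username=username, password=password)
            try:
                sftp = client.open_sftp()
                remote_file = sftp.open(remote_path, 'rb')
                try:
                    return remote_file.read(nbytes)  # only the header, not the whole file
                finally:
                    remote_file.close()
            finally:
                client.close()

        # Example: header = read_header('example.com', 'user', 'secret', '/data/big.file', 1024)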

    Read the article

  • Commons VFS and IBM MVS System

    - by Liming
    Hello all, I'm using Apache Commons VFS / SFTP, and we are trying to download files from the IBM MVS system. The download part is all good; however, we cannot open up the zipped files after downloading. It seems like the zip file was compressed using a different algorithm or something. Does anyone have any pointers? (Note: the same function works fine if we connect to a regular Unix/Linux SFTP server.) Below is an example of what we did:

        String defaultHost = "[my sftp ip address]";
        String host = defaultHost;
        String defaultRemotePath = "//__root.dir1.dir2.";
        String remotePath = defaultRemotePath;
        String user = "test";
        String password = "test";
        String remoteFileName = "Blah.ZIP.BLAH";
        log.info("FtpPojo() begin instantiation");
        FileObject localFileObject = fsManager.resolveFile("C:/Work/Blah.ZIP.BLAH");
        log.debug("local file name is :" + localFileObject.getName().getBaseName());
        log.debug("FtpPojo() instantiated and fsManager created");
        String uri = createSftpUri(host, user, password) + ":322" + remotePath + remoteFileName;
        remoteRepo = fsManager.resolveFile(uri, fsOptions);
        remoteRepo.copyFrom(localFileObject, Selectors.SELECT_ALL);

    Read the article

  • Pushing or serving real-time data to an excel spreadsheet

    - by evan_irl
    I am running some test automation on a networked computer resource (remote). The remote computer running the test automation generates some output, which I can customize however I wish - probably a text or Excel file. I would like to create an Excel spreadsheet which, from my local machine, monitors this output and provides real-time analytics. Later I would make the networked computer visible to more people, and they could use the same spreadsheet to monitor this output. My problem is that this networked computer is located on the other side of the earth, so using any kind of polling in Excel VBA to PULL the data from the networked computer results in a very long wait with the pinwheel spinning, making the sheet clumsy and less useful. The same thing happens when I use Excel's built-in function for linking to "external resources". Is there any way to PUSH data to the Excel spreadsheet from the networked computer? Something that is easy to set up would be ideal; the latency does not have to be low, so long as there is no awkward "busy wait" while the sheet updates. If that is not possible, is there any way of using PULL from the Excel sheet that avoids the same busy wait?

    Read the article

  • What type of objects can be sent back to an action Method using HTML.HIDDEN()

    - by Richard77
    Hello,
    1) Let's say I have this form:

        <% using (Html.BeginForm()) { %>
            <%= Html.Hidden("myObject", (cast to the appropriate type)ViewData["KeyForMyObject"]) %>
            <input type="submit" value="Submit Object" />
        <% } %>

    2) Here's the action which is supposed to intercept the value of the object:

        public ActionResult MyAction(Type myObject)
        {
            // Do something with the object
        }

    Here's my question: what types of object can the hidden field support? In fact, when ViewData["KeyForMyObject"] contains a string, int, or bool, MyAction is able to retrieve the value. But when it comes to objects such as List and Dictionary, nothing happens. When I debug to check the local values, I see null for myObject in the action method. So what are the rules in MVC when it comes to a List or Dictionary?

    ================================= EDIT
    To make things simpler, can I write something like this:

        <%= Html.Hidden("contactDic", (Dictionary<string, string>)ViewData["contacts"]) %>

    and expect to retrieve the dictionary in the action method like this?

        public ActionResult myMethod(Dictionary<string, string> contactDic)
        {
            // Do something with the dictionary
        }

    Thanks for helping

    Read the article

  • Bash script not working on a new dedicated server

    - by Scott
    Recently I migrated to a new dedicated server which is running the same operating system - FreeBSD 8.2. I have root access and all permissions have been set properly. My problem is that the bash script I was running on the old server doesn't work on the new machine; the only error appearing while running the script is:

        # sh script.sh
        script.sh: 3: Syntax error: word unexpected (expecting ")")

    Here is the code itself:

        #!/usr/local/bin/bash
        PORTS=(7777:GAME 11000:AUTH 12000:DB)
        MESSG=""
        for i in ${PORTS[@]} ; do
            PORT=${i%%:*}
            DESC=${i##*:}
            CHECK=`sockstat -4 -l | grep :$PORT | awk '{print $3}' | head -1`
            if [ "$CHECK" -gt 1 ]; then
                echo $DESC[$PORT] "is up ..." $CHECK
            else
                MESSG=$MESSG"$DESC[$PORT] wylaczony...\n"
                if [ "$DESC" == "AUTH" ]; then
                    MESSG=$MESSG"AUTH is down...\n"
                fi
                if [ "$DESC" == "GAME" ]; then
                    MESSG=$MESSG"GAME is down...\n"
                fi
                if [ "$DESC" == "DB" ]; then
                    MESSG=$MESSG"DB is down...\n"
                fi
            fi
        done
        if [ -n "$MESSG" ]; then
            echo -e "Some problems ocurred:\n\n"$MESSG | mail -s "Problems" [email protected]
        fi

    I don't really code in bash, so I don't know why this happened...

    Read the article

  • Change post form data function into curl

    - by QLiu
    Hello guys, the old way on our website was that when a user clicks the "logout" button, it runs a POST form which passes parameters (logout, sn) to an external site to execute its "logout" function. I do not want the users to jump to the external site; therefore I use cURL to post the data (because we are on different domains, I guess an Ajax request does not work), posting the same data to execute the logout function on the external site.

        // create cURL resource
        $URL = "http://bswi.development.intra.local/";
        // Init curl
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, $URL); // Load in the destination URL
        curl_setopt($ch, CURLOPT_HTTPAUTH, CURLAUTH_BASIC); // Normal HTTP request, not SSL
        curl_setopt($ch, CURLOPT_POSTFIELDS, "logout=1");
        // receive server response ...
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $content = curl_exec($ch);
        echo $content;
        curl_close($ch);

    Do you think I am going in the right direction?

    Read the article

  • Odd Things of ASP.NET MVC Deployment on IIS 6

    - by misaxi
    Recently, I became a bit interested in the deployment of ASP.NET MVC applications on IIS 6, because Phil Haack posted an easier way to deploy ASP.NET MVC applications on ASP.NET 4. So I decided to see how different versions of ASP.NET MVC work on different versions of ASP.NET. First off, I created an ASP.NET MVC 2 project in Visual Studio 2010 and deployed it to IIS 6 on Windows Server 2003 (only .NET Framework 3.5 installed). I set the application to run in ASP.NET 2.0 and did no extra setup, because I just wanted to see what sort of error would occur. And as expected, an error was reported. Then I set the Copy Local attribute of the System.Web.Mvc assembly to true and deployed again. As a result, the application ran smoothly. I had read tons of material about the mess of deploying MVC applications on IIS 6, and I fought to tackle deployment issues in a previous project. At the very least, if you used extensionless URLs in your application, you should have had to configure wildcard mapping in IIS. But in this case, I didn't even get the chance to do so. What on earth was going on? Did I discover a new continent?

    Read the article

  • How can I set paperclip's storage mechanism based on the current Rails environment?

    - by John Reilly
    I have a Rails application that has multiple models with Paperclip attachments that are all uploaded to S3. This app also has a large test suite that is run quite often. The downside with this is that a ton of files are uploaded to our S3 account on every test run, making the test suite run slowly. It also slows down development a bit, and requires you to have an internet connection in order to work on the code. Is there a reasonable way to set the Paperclip storage mechanism based on the Rails environment? Ideally, our test and development environments would use local filesystem storage, and the production environment would use S3 storage. I'd also like to extract this logic into a shared module of some kind, since we have several models that will need this behavior. I'd like to avoid a solution like this inside of every model:

        ### We don't want to do this in our models...
        if Rails.env.production?
          has_attached_file :image, :styles => {...},
            :storage => :s3,
            # ...etc...
        else
          has_attached_file :image, :styles => {...},
            :storage => :filesystem,
            # ...etc...
        end

    Any advice or suggestions would be greatly appreciated! :-)

    Read the article

  • Perl: Value of response code in HTTP::Request

    - by lola
    Hi all, I am writing some code to get a document from the internet. The document size is around 200 KB. This is the code:

        #!/usr/local/bin/perl -w
        use strict;
        use LWP::UserAgent;

        my $ua = LWP::UserAgent->new;
        my $url = "SOME URL";
        my $req = HTTP::Request->new(GET => $url);
        my $res = $ua->request($req);
        if ($res->is_success) {
            print $res->content . "\n";
        }
        else {
            print "Error: " . $res->status_line;
        }

    Now, the only problem is I can't mention what the URL is. However, the output is: "Error: 500 read timeout". When I checked the link externally, the data downloaded in under 5 seconds. I even changed the timeout to 1000 s, but it still didn't work. How should I go about finding more information related to the response? The size of the file (around 200 KB) is also not large enough to warrant a read timeout. The server is also not a busy one, and didn't give any problem whenever I checked the link in the browser. Thanks.
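
    One way to narrow down whether the timeout comes from the server or from the client setup is to fetch the same URL with a different client and compare. A minimal Python sketch for such a cross-check (the URL stays a placeholder, as in the question, and the requests library is a third-party package):

        import requests  # third-party; pip install requests

        url = "SOME URL"  # placeholder, as in the question
        try:
            # Generous timeout so a slow-but-working server still answers.
            response = requests.get(url, timeout=60)
            print(response.status_code, response.reason)
            print("elapsed:", response.elapsed)
            print("bytes received:", len(response.content))
        except requests.exceptions.Timeout:
            print("timed out from this client as well")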

    Read the article

  • Symfony deploying issue

    - by medhad
    I have a problem while configuring a symfony project on the production server. When I run the command doctrine --build --all --and-load it gives me errors in the production environment:

        doctrine  Dropping "doctrine" database
        PHP Notice: Undefined index: dbname in /var/www/sf_project/lib/vendor/symfony/lib/plugins/sfDoctrinePlugin/lib/vendor/doctrine/Doctrine/Connection.php on line 1472
        Notice: Undefined index: dbname in /var/www/sf_project/lib/vendor/symfony/lib/plugins/sfDoctrinePlugin/lib/vendor/doctrine/Doctrine/Connection.php on line 1472
        doctrine  SQLSTATE[42000]: Syntax error or access violation: 1064 You have an erro...e right syntax to use near '' at line 1. Failing Query: "DROP DATABASE "
        doctrine  Creating "dev" environment "doctrine" database
        PHP Notice: Undefined index: dbname in /var/www/sf_project/lib/vendor/symfony/lib/plugins/sfDoctrinePlugin/lib/vendor/doctrine/Doctrine/Connection.php on line 1439

    However, after the error it creates the tables successfully. But if I run the command a second time, it fails partially while creating the tables. I have changed my database.yml configuration properly for the production environment. Here it is:

        all:
          doctrine:
            class: sfDoctrineDatabase
            param:
              dsn: mysql:host=localhost;dbname=sf_project
              port: 3306
              username: root
              password: mainserver

    It works fine in the local environment, though. Can someone shed some light on it?

    Read the article

  • Ant Junit tests are running much slower via ant than via IDE - what to look at?

    - by Alex B
    I am running my JUnit tests via Ant and they are running substantially slower than via the IDE. My Ant call is:

        <junit fork="yes" forkmode="once" printsummary="off">
            <classpath refid="test.classpath"/>
            <formatter type="brief" usefile="false"/>
            <batchtest todir="${test.results.dir}/xml">
                <formatter type="xml"/>
                <fileset dir="src" includes="**/*Test.java" />
            </batchtest>
        </junit>

    The same test that runs near instantaneously in my IDE (0.067 s) takes 4.632 s when run through Ant. In the past, I've been able to speed up test problems like this by using the junit fork parameter, but this doesn't seem to be helping in this case. What properties or parameters can I look at to speed up these tests? More info: I am comparing the time reported by the IDE with the time that the junit task outputs, not the total time reported at the end of the Ant run. So, bizarrely, this problem has resolved itself. What could have caused it? The system runs on a local disk, so that is not the problem.

    Read the article
