Search Results

Search found 10693 results on 428 pages for 'max requests'.


  • Maximum number of characters using keystrokes A, Ctrl+A, Ctrl+C and Ctrl+V

    - by munda
    This is an interview question from Google. I am not able to solve it by myself. Can somebody shed some light?

    Write a program to print the sequence of keystrokes that generates the maximum number of 'A' characters. You are allowed to use only 4 keys: A, Ctrl+A, Ctrl+C and Ctrl+V. Only N keystrokes are allowed, and every Ctrl+ combination counts as one keystroke, so Ctrl+A is one keystroke. For example, the sequence A, Ctrl+A, Ctrl+C, Ctrl+V generates two A's in 4 keystrokes. (Ctrl+A is Select All, Ctrl+C is Copy, Ctrl+V is Paste.)

    I did some mathematics. For any N, using x presses of A, one Ctrl+A, one Ctrl+C and y presses of Ctrl+V, we can generate at most ((N-1)/2)^2 A's. For some N >= M it is better to use as many Ctrl+A, Ctrl+C, Ctrl+V sequences as possible, since each one doubles the number of A's. Note that the sequence Ctrl+A, Ctrl+V, Ctrl+C will not overwrite the existing selection; it will append the copied text to the selection.
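    As a rough illustration (my own sketch in Python, not part of the original question), the observation above turns into a short dynamic program: best[k] is the most A's reachable with k keystrokes, and the last move is either a plain press of A or the final paste of a trailing Ctrl+A, Ctrl+C, Ctrl+V... run.

        def max_as(n):
            # best[k] = maximum number of A's printable with k keystrokes
            best = [0] * (n + 1)
            for k in range(1, n + 1):
                best[k] = best[k - 1] + 1  # option 1: press A
                # option 2: build for b keystrokes, then Ctrl+A, Ctrl+C,
                # and (k - b - 2) pastes, yielding best[b] * (k - b - 1) A's
                for b in range(1, k - 2):
                    best[k] = max(best[k], best[b] * (k - b - 1))
            return best[n]

        print(max_as(7))  # 9: A A A, Ctrl+A, Ctrl+C, Ctrl+V, Ctrl+V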

    Read the article

  • Need a tool to search large structure text documents for words, phrases and related phrases

    - by pitosalas
    I have to keep up with structured documents containing things such as requests for proposals, government program reports, threat models and all kinds of things like that. They are in what I would call techno-legalese: highly structured, with section numbering and 3, 4 and 5 levels of nesting, all in English.

    I need a more efficient way to locate those paragraphs of nuggets that matter to me. What I'd like is a kind of local document index/repository that would let me keep some standing queries and easily locate the sections in documents that talk about them. Here's an example: I'd like to load in 10 large PDF files, each of, say, 100 pages. Each PDF contains English text, formatted very nicely into paragraphs and sections. I'd like to specify that I am interested in "blogging platforms", "weaknesses in Ruby", and "localization and internationalization", and then look at a list showing the section of text, the name of the document, and other information that seems to be related to and/or include the words and phrases I specified.

    I am sure something like this exists. I would call it something like document indexing, document comprehension or structured searching.

    Read the article

  • AJAX Partial Rendering issues for the default page in IIS 7 when using custom http module

    - by WiseGuyEh
    The problem: when I try to make an AJAX partial update request (using the UpdatePanel control) from the default page of an IIS 7 web site, it fails. Instead of returning the HTML to be updated, it returns the entire page, which then causes the MS AJAX JavaScript to throw a parsing fit.

    The suspected cause: I have narrowed it down to two factors, starting with making an AJAX request to the default page while a certain custom HTTP module is registered. A partial rendering request to http://localhost will fail, but a partial rendering request to http://localhost/default.aspx works fine. Also, if I remove the following line from my custom HttpModule, the AJAX partial render works correctly:

        _application.PreRequestHandlerExecute += OnPreRequestHandlerExecute;

    Weird, huh? Another weird thing: if I look at trace.axd, I can see that when a partial rendering request fails, two POST requests are logged for the one partial rendering request - one where the default.aspx page executes successfully (trace information such as Page_Load is logged) but no content is produced, and a second that doesn't seem to actually execute (no trace information is logged) but produces content (HTTP_CONTENT_LENGTH is greater than 0).

    Please help! If anyone with a good knowledge of HTTP modules or the MS AJAX HTTP module could explain why this is occurring I would be very grateful. As it is, the obvious workaround is to just redirect to default.aspx if the request URL is "/", but I would really like to understand why this is occurring.

    Read the article

  • How to use a loop to download HTML with paging?

    - by Nai
    I want to loop through this URL and download the HTML:

        https://www.googleapis.com/customsearch/v1?key=AIzaSyAAoPQprb6aAV-AfuVjoCdErKTiJHn-4uI&cx=017576662512468239146:omuauf_lfve&q=" + searchTermFormat + "&num=10" + "&start=" + i

    start and num control the paging of the URL. So if &start=2 and &num=10, it will scrape 10 results starting at result 2. Given that Google has a max limit of num = 10, how can I write a loop that pages through the results and scrapes the first 10 pages? This is what I have so far, which just scrapes the first page:

        //input search term
        Console.WriteLine("What is your search query?:");
        string searchTerm = Console.ReadLine();

        //concatenate the strings using the + symbol to make it URL friendly for google
        string searchTermFormat = searchTerm.Replace(" ", "+");

        //create a new instance of WebClient and use the DownloadString method to download the html
        WebClient client = new WebClient();
        int i = 1;
        string Json = client.DownloadString("https://www.googleapis.com/customsearch/v1?key=AIzaSyAAoPQprb6aAV-AfuVjoCdErKTiJHn-4uI&cx=017576662512468239146:omuauf_lfve&q=" + searchTermFormat + "&num=10" + "&start=" + i);

        //create a new instance of JavaScriptSerializer and deserialise the desired content
        JavaScriptSerializer js = new JavaScriptSerializer();
        GoogleSearchResults results = js.Deserialize<GoogleSearchResults>(Json);

        //output results to console
        Console.WriteLine(js.Serialize(results));
        Console.ReadLine();
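    For what it's worth, the paging logic itself is easiest to see in a short Python sketch (my own illustration, not the question's C#): the Custom Search API's start parameter is 1-based, so page p begins at start = p * num + 1, and the loop just advances it by num.

        import requests  # third-party HTTP client, used here for brevity

        BASE = "https://www.googleapis.com/customsearch/v1"

        def fetch_first_pages(key, cx, query, pages=10, num=10):
            items = []
            for page in range(pages):
                params = {
                    "key": key,
                    "cx": cx,
                    "q": query,
                    "num": num,
                    "start": page * num + 1,  # 1, 11, 21, ... (1-based)
                }
                resp = requests.get(BASE, params=params)
                resp.raise_for_status()
                items.extend(resp.json().get("items", []))
            return items

    The equivalent change in the C# above would be to wrap the DownloadString call in a for loop and advance i by 10 on each pass.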

    Read the article

  • ByteFlow installation Error on Windows

    - by Patrick
    Hi Folks, when I try to install ByteFlow on my Windows development machine, I get the following MySQL error and I don't know what to do. Please give me some suggestions. Thank you so much!

        E:\byteflow-5b6d964917b5>manage.py syncdb
        !!! Read about DEBUG in settings_local.py and then remove me !!!
        !!! Read about DEBUG in settings_local.py and then remove me !!!
        J:\Program Files\Python26\lib\site-packages\MySQLdb\converters.py:37: DeprecationWarning: the sets module is deprecated
          from sets import BaseSet, Set
        Creating table auth_permission
        Creating table auth_group
        Creating table auth_user
        Creating table auth_message
        Creating table django_content_type
        Creating table django_session
        Creating table django_site
        Creating table django_admin_log
        Creating table django_flatpage
        Creating table actionrecord
        Creating table blog_post
        Traceback (most recent call last):
          File "E:\byteflow-5b6d964917b5\manage.py", line 11, in <module>
            execute_manager(settings)
          File "J:\Program Files\Python26\lib\site-packages\django\core\management\__init__.py", line 362, in execute_manager
            utility.execute()
          File "J:\Program Files\Python26\lib\site-packages\django\core\management\__init__.py", line 303, in execute
            self.fetch_command(subcommand).run_from_argv(self.argv)
          File "J:\Program Files\Python26\lib\site-packages\django\core\management\base.py", line 195, in run_from_argv
            self.execute(*args, **options.__dict__)
          File "J:\Program Files\Python26\lib\site-packages\django\core\management\base.py", line 222, in execute
            output = self.handle(*args, **options)
          File "J:\Program Files\Python26\lib\site-packages\django\core\management\base.py", line 351, in handle
            return self.handle_noargs(**options)
          File "J:\Program Files\Python26\lib\site-packages\django\core\management\commands\syncdb.py", line 78, in handle_noargs
            cursor.execute(statement)
          File "J:\Program Files\Python26\lib\site-packages\django\db\backends\util.py", line 19, in execute
            return self.cursor.execute(sql, params)
          File "J:\Program Files\Python26\lib\site-packages\django\db\backends\mysql\base.py", line 84, in execute
            return self.cursor.execute(query, args)
          File "J:\Program Files\Python26\lib\site-packages\MySQLdb\cursors.py", line 166, in execute
            self.errorhandler(self, exc, value)
          File "J:\Program Files\Python26\lib\site-packages\MySQLdb\connections.py", line 35, in defaulterrorhandler
            raise errorclass, errorvalue
        _mysql_exceptions.OperationalError: (1071, 'Specified key was too long; max key length is 767 bytes')

    Read the article

  • RPC for java/python with rest support, HTML monitoring and goodies

    - by Ran
    Here's my set of requirements. I'm looking for an RPC framework such as Thrift, Avro or Protocol Buffers (once services are added to it) which supports:

    - An easy and intuitive IDL: no serial numbers, no manual versioning, simple... Avro is a good example of this.
    - Works with Java and Python.
    - Both a fast binary protocol and an HTTP-based RESTful style. I'd like to be able to use it for backend-to-backend communication (Java-Java or Python-Java) as well as frontend-to-backend communication (JavaScript to Java). The REST support needs to include &param=value input as GET/POST requests (configurable per request) and output in three possible formats: JSON, JSONP and XML.
    - Compact, fast, backward compatible, easy to upgrade, etc.
    - Some nice monitoring interfaces, such as JMX and web page status reports (e.g. packets in, packets out, error rate, etc.).
    - Ops friendly: no need to take the whole site down to release new versions.
    - Both sync and async communication.
    - ... other goodies are welcome.

    Is there something out there? So far I've looked at Thrift and Avro, and they are both nice in some ways, but they don't check everything on my list. Thanks

    Read the article

  • "Session is Closed!" - NHibernate

    - by Alexis Abril
    This is in a web application environment: an initial request completes successfully, but any additional requests return a "Session is Closed" response from the NHibernate framework. I'm using an HttpModule approach with the following code:

        public class MyHttpModule : IHttpModule
        {
            public void Init(HttpApplication context)
            {
                context.EndRequest += ApplicationEndRequest;
                context.BeginRequest += ApplicationBeginRequest;
            }

            public void ApplicationBeginRequest(object sender, EventArgs e)
            {
                CurrentSessionContext.Bind(SessionFactory.Instance.OpenSession());
            }

            public void ApplicationEndRequest(object sender, EventArgs e)
            {
                ISession currentSession = CurrentSessionContext.Unbind(SessionFactory.Instance);
                currentSession.Dispose();
            }

            public void Dispose() { }
        }

    SessionFactory.Instance is my singleton implementation, using FluentNHibernate to return an ISessionFactory object. In my repository class, I attempt to use the following syntax:

        public class MyObjectRepository : IMyObjectRepository
        {
            public MyObject GetByID(int id)
            {
                using (ISession session = SessionFactory.Instance.GetCurrentSession())
                    return session.Get<MyObject>(id);
            }
        }

    This allows code in the application to be called as such:

        IMyObjectRepository repo = new MyObjectRepository();
        MyObject obj = repo.GetByID(1);

    I have a suspicion my repository code is to blame, but I'm not 100% sure of the actual implementation I should be using. I found a similar issue on SO. I too am using WebSessionContext in my implementation, but no solution was provided there other than writing a custom session manager. For simple CRUD operations, is a custom session provider required, apart from the built-in tools (i.e. WebSessionContext)?

    Read the article

  • How to avoid overlapping date ranges when using a grouping clause?

    - by k rey
    I have a situation where I need to find the time spans between value changes. I tried a simple GROUP BY clause, but it merges the separate runs into overlapping ranges. Consider the following example:

        create table #items (
            code varchar(4)
          , class varchar(4)
          , txdate datetime
        )

        insert into #items (code, class, txdate) values ('A', 'C', '2010-01-01');
        insert into #items (code, class, txdate) values ('A', 'C', '2010-01-02');
        insert into #items (code, class, txdate) values ('A', 'C', '2010-01-03');
        insert into #items (code, class, txdate) values ('A', 'D', '2010-01-04');
        insert into #items (code, class, txdate) values ('A', 'D', '2010-01-05');
        insert into #items (code, class, txdate) values ('A', 'C', '2010-01-06');
        insert into #items (code, class, txdate) values ('A', 'C', '2010-01-07');
        insert into #items (code, class, txdate) values ('A', 'D', '2010-01-08');
        insert into #items (code, class, txdate) values ('A', 'D', '2010-01-09');

        select code
             , class
             , min(txdate) mindate
             , max(txdate) maxdate
        from #items
        group by code, class

    This returns the following results (notice the overlapping date ranges):

        |code|class|mindate   |maxdate   |
        ----------------------------------
        |A   |C    |2010-01-01|2010-01-07|
        |A   |D    |2010-01-04|2010-01-09|

    I would like the query to return the following:

        |code|class|mindate   |maxdate   |
        ----------------------------------
        |A   |C    |2010-01-01|2010-01-03|
        |A   |D    |2010-01-04|2010-01-05|
        |A   |C    |2010-01-06|2010-01-07|
        |A   |D    |2010-01-08|2010-01-09|

    Any ideas and suggestions?
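    In SQL this is the classic "gaps and islands" problem, usually solved by grouping on the difference of two ROW_NUMBER() sequences. The run-detection idea itself is easiest to see in a short Python sketch (my own illustration, using the question's data):

        from itertools import groupby

        # rows ordered by txdate, as in the example above
        rows = [
            ("A", "C", "2010-01-01"), ("A", "C", "2010-01-02"), ("A", "C", "2010-01-03"),
            ("A", "D", "2010-01-04"), ("A", "D", "2010-01-05"),
            ("A", "C", "2010-01-06"), ("A", "C", "2010-01-07"),
            ("A", "D", "2010-01-08"), ("A", "D", "2010-01-09"),
        ]

        # groupby only groups *consecutive* equal keys, so each run of the
        # same (code, class) becomes its own island
        for (code, cls), run in groupby(rows, key=lambda r: (r[0], r[1])):
            dates = [r[2] for r in run]
            print(code, cls, dates[0], dates[-1])

    This prints exactly the four desired rows; the SQL version reproduces the same "consecutive run" grouping with the row-number difference trick.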

    Read the article

  • Stretch Background Image & Resize With Browser Window

    - by user241673
    I am trying to replicate the image resizing found at http://devkick.com/lab/fsgallery/ but with the code I have below it is not working properly. When resizing the browser window to have a small width and a big height, white space shows up at the bottom of the page. Feel free to see it and edit at http://jsbin.com/ifolu3

    The CSS:

        html, body {width:100%; height:100%; overflow:hidden;}
        div.bg {position:absolute; width:200%; height:200%; top:-50%; left:-50%;}
        img.bg {min-height:50%; min-width:50%; margin:0 auto; display:block;}

    The JS/jQuery (the comparison operator was eaten by the original formatting; presumably a greater-than):

        $(window).resize(function(){
            var ratio = Math.max(
                $(window).width() / $('img.bg').width(),
                $(window).height() / $('img.bg').height()
            );
            if ($(window).width() > $(window).height()) {
                $('img.bg').css({width: image.width() * ratio, height: 'auto'});
            } else {
                $('img.bg').css({width: 'auto', height: image.height() * ratio});
            }
        });

    The HTML:

        <body>
          <div class="bg">
            <img class="bg" src="bg.jpg" />
          </div>
        </body>

    Read the article

  • making binned boxplot in matplotlib with numpy and scipy in Python

    - by user248237
    I have a 2-D array containing pairs of values and I'd like to make a boxplot of the y-values by different bins of the x-values. I.e. if the array is:

        my_array = array([[1, 40.5], [4.5, 60], ...])

    then I'd like to bin my_array[:, 0] and then, for each bin, produce a boxplot of the corresponding my_array[:, 1] values that fall into it. So in the end I want the plot to contain as many box plots as there are bins. I tried the following:

        min_x = min(my_array[:, 0])
        max_x = max(my_array[:, 1])
        num_bins = 3
        bins = linspace(min_x, max_x, num_bins)
        elts_to_bins = digitize(my_array[:, 0], bins)

    However, this gives me values in elts_to_bins that range from 1 to 3. I thought I should get 0-based indices for the bins, and I only wanted 3 bins. I'm assuming this is due to some trickiness in how bins are represented in linspace vs. digitize. What is the easiest way to achieve this? I want num_bins equally spaced bins, with the first bin containing the lower half of the data and the upper bin containing the upper half... i.e., I want each data point to fall into some bin, so that I can make a boxplot. Thanks.
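    A minimal sketch of one way to do this (my own illustration, on made-up data): linspace(min, max, num_bins) yields num_bins edges, i.e. only num_bins - 1 bins, so for num_bins bins you want num_bins + 1 edges; and digitize returns 1-based bin indices, with a value equal to the last edge landing one past the final bin.

        import numpy as np
        import matplotlib.pyplot as plt

        # hypothetical data in the question's format: columns are x, y
        my_array = np.array([[1.0, 40.5], [2.2, 35.0], [3.1, 50.0],
                             [4.5, 60.0], [5.8, 52.5], [7.0, 48.0]])

        num_bins = 3
        # num_bins + 1 edges give exactly num_bins bins over the x range
        edges = np.linspace(my_array[:, 0].min(), my_array[:, 0].max(), num_bins + 1)

        # digitize is 1-based; clip so x == max edge falls in the last bin
        idx = np.clip(np.digitize(my_array[:, 0], edges), 1, num_bins)

        # collect the y-values for each bin and draw one box per bin
        groups = [my_array[idx == b, 1] for b in range(1, num_bins + 1)]
        plt.boxplot(groups)
        plt.show()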

    Read the article

  • How to make a request from an android app that can enter a Spring Security secured webservice method

    - by johnrock
    I have a Spring Security (form-based authentication) web app running CXF JAX-RS web services, and I am trying to connect to a web service from an Android app, authenticated on a per-user basis. Currently, when I add an @Secured annotation to my web service method, all requests to the method are denied. I have tried to pass in the credentials of a valid user/password (one that exists in the Spring Security web app and can log in to it successfully) from the Android call, but the request still fails to enter the method when the @Secured annotation is present. The SecurityContext parameter returns null when calling getUserPrincipal(). How can I make a request from an Android app that can enter a Spring Security secured web service method? Here is the code I am working with at the moment.

    The Android call:

        httpclient.getCredentialsProvider().setCredentials(
            //new AuthScope("192.168.1.101", 80),
            new AuthScope(null, -1),
            new UsernamePasswordCredentials("joeuser", "mypassword"));
        String userAgent = "Android/" + getVersion();
        HttpGet httpget = new HttpGet(MY_URI);
        httpget.setHeader("User-Agent", userAgent);
        httpget.setHeader("Content-Type", "application/xml");
        HttpResponse response;
        try {
            response = httpclient.execute(httpget);
            HttpEntity entity = response.getEntity();
            ... // parse xml

    The web service method:

        @GET
        @Path("/payload")
        @Produces("application/XML")
        @Secured({"ROLE_USER","ROLE_ADMIN","ROLE_GUEST"})
        public Response makePayload(@Context Request request, @Context SecurityContext securityContext){
            Payload payload = new Payload();
            payload.setUsersOnline(new Long(200));
            if (payload == null) {
                return Response.noContent().build();
            } else {
                return Response.ok().entity(payload).build();
            }
        }

    Read the article

  • HDFS some datanodes of cluster are suddenly disconnected while reducers are running

    - by user1429825
    I have 8 slave computers and 1 master computer running Hadoop (ver 0.21). Some datanodes of the cluster are suddenly disconnected while I run MapReduce code on 10 GB of data. After all mappers finish and around 80% of the reducers have processed, randomly one or more datanodes disconnect from the network, and then the other datanodes start to disappear from the network even if I kill the MapReduce job as soon as I find a datanode has disconnected.

    I've tried changing dfs.datanode.max.xcievers to 4096, turning off the firewalls of all computing nodes, disabling SELinux and increasing the open-file limit to 20000, but none of it worked at all. Does anyone have an idea how to solve this problem?

    The following are the errors logged by the MapReduce job:

        12/06/01 12:31:29 INFO mapreduce.Job: Task Id : attempt_201206011227_0001_r_000006_0, Status : FAILED
        java.io.IOException: Bad connect ack with firstBadLink as ***.***.***.148:20010
            at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:889)
            at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:820)
            at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:427)

    And these are logs from a datanode:

        2012-06-01 13:01:01,118 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-5549263231281364844_3453 src: /*.*.*.147:56205 dest: /*.*.*.142:20010
        2012-06-01 13:01:01,136 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020) Starting thread to transfer block blk_-3849519151985279385_5906 to *.*.*.147:20010
        2012-06-01 13:01:19,135 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-5797481564121417802_3453 to *.*.*.146:20010 got java.net.ConnectException: Connection timed out
            at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
            at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701)
            at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
            at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:373)
            at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1257)
            at java.lang.Thread.run(Thread.java:722)
        2012-06-01 13:06:20,342 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded for blk_6674438989226364081_3453
        2012-06-01 13:09:01,781 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-3849519151985279385_5906 to *.*.*.147:20010 got java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/*.*.*.142:60057 remote=/*.*.*.147:20010]
            at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
            at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:164)
            at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:203)
            at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:388)
            at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:476)
            at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1284)
            at java.lang.Thread.run(Thread.java:722)

    hdfs-site.xml:

        <configuration>
          <property>
            <name>dfs.name.dir</name>
            <value>/home/hadoop/data/name</value>
          </property>
          <property>
            <name>dfs.data.dir</name>
            <value>/home/hadoop/data/hdfs1,/home/hadoop/data/hdfs2,/home/hadoop/data/hdfs3,/home/hadoop/data/hdfs4,/home/hadoop/data/hdfs5</value>
          </property>
          <property>
            <name>dfs.replication</name>
            <value>3</value>
          </property>
          <property>
            <name>dfs.datanode.max.xcievers</name>
            <value>4096</value>
          </property>
          <property>
            <name>dfs.http.address</name>
            <value>0.0.0.0:20070</value>
            <description>50070 The address and the base port where the dfs namenode web ui will listen on. If the port is 0 then the server will start on a free port.</description>
          </property>
          <property>
            <name>dfs.datanode.http.address</name>
            <value>0.0.0.0:20075</value>
            <description>50075 The datanode http server address and port. If the port is 0 then the server will start on a free port.</description>
          </property>
          <property>
            <name>dfs.secondary.http.address</name>
            <value>0.0.0.0:20090</value>
            <description>50090 The secondary namenode http server address and port. If the port is 0 then the server will start on a free port.</description>
          </property>
          <property>
            <name>dfs.datanode.address</name>
            <value>0.0.0.0:20010</value>
            <description>50010 The address where the datanode server will listen to. If the port is 0 then the server will start on a free port.</description>
          </property>
          <property>
            <name>dfs.datanode.ipc.address</name>
            <value>0.0.0.0:20020</value>
            <description>50020 The datanode ipc server address and port. If the port is 0 then the server will start on a free port.</description>
          </property>
          <property>
            <name>dfs.datanode.https.address</name>
            <value>0.0.0.0:20475</value>
          </property>
          <property>
            <name>dfs.https.address</name>
            <value>0.0.0.0:20470</value>
          </property>
        </configuration>

    mapred-site.xml:

        <configuration>
          <property>
            <name>mapred.job.tracker</name>
            <value>masternode:29001</value>
          </property>
          <property>
            <name>mapred.system.dir</name>
            <value>/home/hadoop/data/mapreduce/system</value>
          </property>
          <property>
            <name>mapred.local.dir</name>
            <value>/home/hadoop/data/mapreduce/local</value>
          </property>
          <property>
            <name>mapred.map.tasks</name>
            <value>32</value>
            <description>default number of map tasks per job.</description>
          </property>
          <property>
            <name>mapred.tasktracker.map.tasks.maximum</name>
            <value>4</value>
          </property>
          <property>
            <name>mapred.reduce.tasks</name>
            <value>8</value>
            <description>default number of reduce tasks per job.</description>
          </property>
          <property>
            <name>mapred.map.child.java.opts</name>
            <value>-Xmx2048M</value>
          </property>
          <property>
            <name>io.sort.mb</name>
            <value>500</value>
          </property>
          <property>
            <name>mapred.task.timeout</name>
            <value>1800000</value> <!-- 30 minutes -->
          </property>
          <property>
            <name>mapred.job.tracker.http.address</name>
            <value>0.0.0.0:20030</value>
            <description>50030 The job tracker http server address and port the server will listen on. If the port is 0 then the server will start on a free port.</description>
          </property>
          <property>
            <name>mapred.task.tracker.http.address</name>
            <value>0.0.0.0:20060</value>
            <description>50060</description>
          </property>
        </configuration>

    Read the article

  • Should I write more SQL to be more efficient, or less SQL to be less buggy?

    - by RenderIn
    I've been writing a lot of one-off SQL queries that return exactly what a certain page needs and no more. Alternatively, I could reuse existing queries and issue a number of SQL requests linear in the number of records on the page. As an example, I have a query to return people and a query to return job details for a person. To return a list of people with their job details, I could query once for the people and then once per person for their job details. I've found that in most cases that approach returns things in a reasonable amount of time, but I don't know how well it will scale in my environment. Instead, I've been writing queries that join people + job details, or people + salary history, etc.

    I'm looking at my models and I see how I could shave off maybe 30% of my code if I were to reuse existing queries. This is a big temptation. Is it a bad thing to go for reuse over efficiency in general, or does it all come down to the specific situation? Should I first do it the easy way and then optimize later, or is it best to get the code knocked out while everything is fresh in my mind? Thoughts, experiences?

    Read the article

  • Why does adding Crossover to my Genetic Algorithm give me worse results?

    - by MahlerFive
    I have implemented a genetic algorithm to solve the Traveling Salesman Problem (TSP). When I use only mutation, I find better solutions than when I add crossover. I know that normal crossover methods do not work for TSP, so I implemented both the Ordered Crossover and the PMX crossover methods, and both suffer from bad results. Here are the other parameters I'm using:

    - Mutation: single swap mutation or inverted subsequence mutation (as described by Tiendil here), with mutation rates tested between 1% and 25%.
    - Selection: roulette wheel selection.
    - Fitness function: 1 / distance of tour.
    - Population size: tested 100, 200 and 500; I also run the GA 5 times so that I have a variety of starting populations.
    - Stop condition: 2500 generations.

    With the same dataset of 26 points, I usually get results of about 500-600 distance using purely mutation with high mutation rates. When adding crossover, my results are usually in the 800 distance range. The other confusing thing is that I have also implemented a very simple hill-climbing algorithm to solve the problem, and when I run it 1000 times (faster than running the GA 5 times) I get results around 410-450 distance, and I would expect to get better results using a GA.

    Any ideas as to why my GA performs worse when I add crossover? And why is it performing much worse than a simple hill-climbing algorithm, which should get stuck on local maxima since it has no way of exploring once it finds a local max?
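    As an aside, here is a minimal Python sketch of one common Ordered Crossover (OX) variant (the question is language-agnostic, so the language here is my choice); a crossover that fails to keep each city exactly once in the child is a classic source of exactly this kind of degradation:

        import random

        def ordered_crossover(p1, p2):
            # Copy a random slice from parent 1, then fill the remaining
            # positions with parent 2's cities in their original order.
            n = len(p1)
            a, b = sorted(random.sample(range(n), 2))
            child = [None] * n
            child[a:b] = p1[a:b]
            taken = set(p1[a:b])
            fill = iter(c for c in p2 if c not in taken)
            for i in range(n):
                if child[i] is None:
                    child[i] = next(fill)
            return child  # always a valid permutation of the cities

        tour1 = [0, 1, 2, 3, 4, 5, 6, 7]
        tour2 = [3, 7, 0, 4, 1, 5, 2, 6]
        print(ordered_crossover(tour1, tour2))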

    Read the article

  • Python subprocess: 64 bit windows server PIPE doesn't exist :(

    - by Spaceman1861
    I have a GUI that launches selected Python scripts and runs them in a cmd window next to the GUI window. I am able to get my launcher to work on my (Windows XP, 32-bit) laptop, but when I upload it to the server (64-bit Windows, IIS 7) I run into some issues. The script runs, to my knowledge, but spits no information back into the cmd window. My script is a bit of a Frankenstein that I have hacked and slashed together, and I am fairly certain it is a very bad example of the subprocess module. Just wondering if I could get a hand :). My question is: how do I have to alter my code to work on a 64-bit Windows server? :)

        from Tkinter import *
        import pickle, subprocess, errno, time, sys, os

        PIPE = subprocess.PIPE

        if subprocess.mswindows:
            from win32file import ReadFile, WriteFile
            from win32pipe import PeekNamedPipe
            import msvcrt
        else:
            import select
            import fcntl

        def recv_some(p, t=.1, e=1, tr=5, stderr=0):
            if tr < 1:
                tr = 1
            x = time.time() + t
            y = []
            r = ''
            pr = p.recv
            if stderr:
                pr = p.recv_err
            while time.time() < x or r:
                r = pr()
                if r is None:
                    if e:
                        raise Exception(message)
                    else:
                        break
                elif r:
                    y.append(r)
                else:
                    time.sleep(max((x - time.time()) / tr, 0))
            return ''.join(y)

        def send_all(p, data):
            while len(data):
                sent = p.send(data)
                if sent is None:
                    raise Exception(message)
                data = buffer(data, sent)

    The code above isn't mine.

        def Run():
            print filebox.get(0)
            location = filebox.get(0)
            location = location.__str__().replace(listbox.get(ANCHOR).__str__(), "")
            theTime = time.asctime(time.localtime(time.time()))
            lastbox.delete(0, END)
            lastbox.insert(END, theTime)
            for line in CookieCont:
                if listbox.get(ANCHOR) in line and len(line) > 4:
                    line[4] = theTime
                else:
                    "Fill In the rip Details to record the time"
            if __name__ == '__main__':
                if sys.platform == 'win32' or sys.platform == 'win64':
                    shell, commands, tail = ('cmd', ('cd "' + location + '"', listbox.get(ANCHOR).__str__()), '\r\n')
                else:
                    return "Please use contact admin"
                a = Popen(shell, stdin=PIPE, stdout=PIPE)
                print recv_some(a)
                for cmd in commands:
                    send_all(a, cmd + tail)
                    print recv_some(a)
                send_all(a, 'exit' + tail)
                print recv_some(a, e=0)

    The code above is mine :)

    Read the article

  • Ajax heavy JS apps using excessive amounts of memory over time.

    - by Shane Reustle
    I seem to have some pretty large memory leaks in an app that I am working on. The app itself is not very complex. Every 15 seconds, the page requests approximately 40 KB of JSON from the server and draws a table on the page using it. It is cheaper to redraw the table from scratch because the data is almost always new. I am attaching a few events to the table, approximately 5 per line and 30 lines in the table. I use jQuery's .html() method to put the new HTML into the container and overwrite the existing content; I do this specifically so that jQuery's cleanup functions go in and attempt to detach all events on the elements being overwritten. I then also delete the large HTML variables once they are sent to the DOM, using delete my_var. I have checked a few times for circular references and for attached events that are never cleared, but never really dug into it. I was wondering if someone could give me a few pointers on how to optimize a very heavy app like this. I just picked up "High Performance JavaScript" by Nicholas Zakas, but haven't had much time to get into it yet. To give an idea of how much memory this uses: after about 4 hours, it is using roughly 420,000 KB in Chrome, and much more in Firefox or IE. Thanks!

    Read the article

  • POST xml to php with apache2

    - by Berry
    I'm working on an application that receives XML data via POST, processes it with a PHP script, and returns an XML response. I'm getting the XML with this PHP code:

        $requestStr = file_get_contents('php://input');
        $requests = simplexml_load_string($requestStr);

    which works fine on the Linux-based product hardware using nginx as the server. However, for testing I'd like to be able to run it on my MacBook Pro, so I can avoid the "build image, install on product, reboot product, wait, test change" loop while I do targeted development on this XML processor. I enabled "web sharing", which starts up Apache, added a rewrite rule to point a convenient URI at my development source directory, and used curl to send a request to my PHP script like this:

        curl -H "Content-Type:text/xml" -d @request.xml http://localhost/test/path/testscript

    testscript is handled by the PHP script fine, but when it goes to read php://input I get nothing - the empty string. Does anyone have a clue why this would work under Linux with nginx and not under Mac OS with Apache? I've googled and searched stackoverflow.com to no avail. Thanks for any answers.

    UPDATE: I've discovered that, at least in this configuration, reading from php://stdin works fine, while php://input does not. Who knew?
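    As a side note, an alternative way to exercise the endpoint while debugging, sketched with Python's third-party requests library (the URL and file name are taken from the curl example above):

        import requests  # pip install requests

        with open("request.xml", "rb") as f:
            xml_body = f.read()

        resp = requests.post(
            "http://localhost/test/path/testscript",
            data=xml_body,                         # raw body, like curl -d @request.xml
            headers={"Content-Type": "text/xml"},  # so PHP won't parse it as form data
        )
        print(resp.status_code)
        print(resp.text)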

    Read the article

  • Any info about book "Unix Internals: The New Frontiers" by Uresh Vahalia 2nd edition (Jan 2010)

    - by claws
    This summer I'm getting into UNIX (mostly *BSD) development. I have graduate-level knowledge of operating systems. I can also understand code and read from here and there, but the thing is I want to make the most of my time. Reading books is best for this. From my search I found that these two books are the established books on UNIX OS internals:

    - The Design and Implementation of the 4.4 BSD Operating System
    - "Unix Internals: The New Frontiers" by Uresh Vahalia

    But the thing is, these books are pretty much outdated. Yay!! Lucky me: "Unix Internals: The New Frontiers" by Uresh Vahalia, 2nd edition (Jan 2010), has been released. I've been searching for information on this book. Sadly, Amazon says "Out of Print--Limited Availability" and I couldn't find any info regarding this book. This is the information I'm looking for:

    - Table of contents
    - What's new in this edition?
    - Where the hell can I buy a soft copy of this book? I really cannot afford a hardcopy.
    - How can I contact the author?

    I have a lot of hopes and expectations for this book; I've been waiting for its release for a long time. I've sent mails requesting a proper website for this book, and I even contacted the publisher for further information, but got no replies from anyone. If you have any other books that you think will help me, please suggest them. I again repeat, I want to get the maximum possible out of these 2.5 months of summer.

    Read the article

  • accessing a value of a nested hash

    - by st
    Hello! I am new to Perl and I have a problem that's very simple, but I cannot find the answer in my Perl book. When printing the result of Dumper($request); I get the following output:

        $VAR1 = bless( {
            '_protocol' => 'HTTP/1.1',
            '_content' => '',
            '_uri' => bless( do{\(my $o = 'http://myawesomeserver.org:8081/counter/')}, 'URI::http' ),
            '_headers' => bless( {
                'user-agent' => 'Mozilla/5.0 (X11; U; Linux i686; en; rv:1.9.0.4) Gecko/20080528 Epiphany/2.22 Firefox/3.0',
                'connection' => 'keep-alive',
                'cache-control' => 'max-age=0',
                'keep-alive' => '300',
                'accept' => 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
                'accept-language' => 'en-us,en;q=0.5',
                'accept-encoding' => 'gzip,deflate',
                'host' => 'localhost:8081',
                'accept-charset' => 'ISO-8859-1,utf-8;q=0.7,*;q=0.7'
            }, 'HTTP::Headers' ),
            '_method' => 'GET',
            '_handle' => bless( \*Symbol::GEN0, 'FileHandle' )
        }, 'HTTP::Server::Simple::Dispatched::Request' );

    How can I access the values of '_method' ('GET') or of 'host' ('localhost:8081')? I know it's an easy question, but Perl is somewhat cryptic at the beginning. Thank you, St.

    Read the article

  • Accessing aggregate functions in a LINQ datasource in a GridView

    - by Stephen Pellicer
    I am working on a traditional WebForms project. In the project I am trying out some LINQ datasources, with plans to eventually migrate to an MVC architecture. I am still very new to LINQ. I have a GridView using a LINQ datasource. The entity I am showing has a to-many relationship, and I would like to get the maximum value of a column on the many side of the relationship. I can show properties of the base entity in the GridView:

        <asp:TemplateField HeaderText="Number" SortExpression="tJobBase.tJob.JobNumber">
            <ItemTemplate>
                <asp:Label ID="Label1" runat="server" Text='<%# Bind("tJobBase.tJob.JobNumber") %>'>
                </asp:Label>
            </ItemTemplate>
        </asp:TemplateField>

    I can also show the count of the many-related property:

        <asp:TemplateField HeaderText="Number" SortExpression="tJobBase.tJob.tHourlies.Count">
            <ItemTemplate>
                <asp:Label ID="Label1" runat="server" Text='<%# Bind("tJobBase.tJob.tHourlies.Count") %>'>
                </asp:Label>
            </ItemTemplate>
        </asp:TemplateField>

    Is there a way to get the max value of a column called WeekEnding in the tHourlies collection to show in the GridView?

    Read the article

  • How can I properly handle 404s in ASP.NET MVC?

    - by Brian
    I am just getting started with ASP.NET MVC, so bear with me. I've searched around this site and various others and have seen a few implementations of this. EDIT: I forgot to mention I am using RC2.

    Using URL routing:

        routes.MapRoute(
            "Error",
            "{*url}",
            new { controller = "Errors", action = "NotFound" } // 404s
        );

    The above seems to take care of requests like "/blah/blah/blah/blah" (assuming the default route tables set up by the initial MVC project).

    Overriding HandleUnknownAction() in the controller itself:

        // 404s - handle here (bad action requested)
        protected override void HandleUnknownAction(string actionName)
        {
            ViewData["actionName"] = actionName;
            View("NotFound").ExecuteResult(this.ControllerContext);
        }

    However, the previous strategies do not handle a request to a bad/unknown controller. For example, I do not have a "/IDoNotExist"; if I request this, I get the generic 404 page from the web server and not my 404, even when using routing plus the override. So finally, my question is: is there any way to catch this type of request using a route or something else in the MVC framework itself? Or should I just default to using the Web.Config customErrors as my 404 handler and forget all this? I assume if I go with customErrors I'll have to store the generic 404 page outside of /Views, due to the Web.Config restrictions on direct access. Anyway, any best practices or guidance is appreciated.

    Read the article

  • Rails3 renders a js.erb template with a text/html content-type instead of text/javascript

    - by Yannis
    Hi, I'm building a new app with 3.0.0.beta3. I'm simply trying to render a js.erb template in response to an Ajax request, for the following action (in publications_controller.rb):

        def get_pubmed_data
          entry = Bio::PubMed.query(params[:pmid]) # searches PubMed and gets the entry
          @publication = Bio::MEDLINE.new(entry)   # creates a Bio::MEDLINE object from the entry text
          flash[:warning] = "No publication found." if @publication.title.blank? and @publication.authors.blank? and @publication.journal.blank?
          respond_to do |format|
            format.js
          end
        end

    Currently, my get_pubmed_data.js.erb template is simply:

        alert('<%= @publication.title %>')

    The server responds with

        alert('Evidence for a herpes simplex virus-specific factor controlling the transcription of deoxypyrimidine kinase.')

    which is perfectly fine, except that nothing happens in the browser, probably because the content type of the response is text/html instead of text/javascript, as shown by this excerpt of the response headers:

        Status 200
        Keep-Alive timeout=5, max=100
        Connection Keep-Alive
        Transfer-Encoding chunked
        Content-Type text/html; charset=utf-8

    Is this a bug, or am I missing something? Thanks for your help!

    Read the article

  • Stored procedure woes ... inserting binary ...

    - by Wardy
    OK, so I have this stored procedure in my SQL 2008 database (it works in 2005 too / used to):

        CREATE PROCEDURE [dbo].[SetBinaryContent]
            @Ref nvarchar(50),
            @Content varbinary(MAX),
            @ObjectID uniqueidentifier
        AS
        BEGIN
            DELETE ObjectContent WHERE ObjectId = @ObjectID AND Ref = @Ref

            IF DATALENGTH(@Content) > 5
            BEGIN
                INSERT INTO ObjectContent (Ref, BinaryContent, ObjectId)
                VALUES (@Ref, @Content, @ObjectId)
            END

            UPDATE Objects SET [Status] = 1 WHERE ID = @ObjectID
        END

    Relatively simple: I take a byte array in C# and chuck it in @Content, then give it a GUID and a string for the other params and off we go. Great, it used to work... but it doesn't anymore, so erm... what's wrong with this stored proc? I've stepped through my C# code thinking I screwed up somehow, but it definitely adds the params and gives them the correct values, so what would cause the server to just stop executing this stored proc correctly? When called, this proc executes but nothing changes in the db: no new records are added to the ObjectContent table. Weird, huh...

    Read the article

  • Android : Handle OAuth callback using intent-filter

    - by Dave Allison
    I am building an Android application that requires OAuth. I have all of the OAuth functionality working except for handling the callback from Yahoo. I have the following in my AndroidManifest.xml:

        <intent-filter>
            <action android:name="android.intent.action.VIEW"></action>
            <category android:name="android.intent.category.DEFAULT"></category>
            <category android:name="android.intent.category.BROWSABLE"></category>
            <data android:host="www.test.com" android:scheme="http"></data>
        </intent-filter>

    where www.test.com will be substituted with a domain that I own. It seems that:

    - The filter is triggered when I click on a link on a page.
    - It is not triggered on the redirect by Yahoo; the browser just opens the website at www.test.com.
    - It is not triggered when I enter the domain name directly in the browser.

    So can anybody help me with:

    - When exactly will this intent-filter be triggered?
    - Any changes to the intent-filter or permissions that would widen the filter to apply to redirect requests?
    - Any other approaches I could use?

    Thanks for your help.

    Read the article

  • Spring URL mapping question

    - by es11
    I am using Java with the Spring framework. Given the following URL:

        www.mydomain.com/contentitem/234

    I need all requests to /contentitem/{numeric value} mapped to a given controller, with the numeric value passed as a parameter to the controller. Right now in my servlet container XML I have simple mappings similar to the following:

        ...
        <entry key="/index.html">
            <ref bean="homeController" />
        </entry>
        ...

    I am just wondering what I need to add to the mapping in order to achieve what I described?

    Edit: I unaccepted the answer temporarily because I can't seem to figure out how to do the mapping in my web.xml (I am using annotations as described in axtavt's answer below). How do I add a proper <url-pattern>..</url-pattern> in my <servlet-mapping> so that a request for "/contentitem/{numeric_value}" gets properly picked up? Thanks!

    Read the article
