Search Results

Search found 10693 results on 428 pages for 'max requests'.


  • C# TCP socket with session

    - by Zé Carlos
    Is there any way of dealing with sessions with sockets in C#? Example of my problem: I have a server with a socket listening on port 5672. TcpListener socket = new TcpListener(localAddr, 5672); socket.Start(); Console.Write("Waiting for a connection... "); // Perform a blocking call to accept requests. TcpClient client = socket.AcceptTcpClient(); Console.WriteLine("Connected to client!"); And I have two clients that will each send one byte. Client A sends 0x1 and client B sends 0x2. From the server side, I read this data like this: Byte[] bytes = new Byte[256]; String data = null; NetworkStream stream = client.GetStream(); while ((stream.Read(bytes, 0, bytes.Length)) != 0) { byte[] answer = new ... stream.Write(answer , 0, answer.Length); } Then client A sends 0x11. I need a way to know that this client is the same one that sent "0x1" before.
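
    Since TCP is connection-oriented, each accepted TcpClient already identifies one client for as long as its connection stays open, so per-client state can simply be keyed by the connection (or its remote endpoint). A rough sketch of that idea, written here in Java purely for illustration (the TcpListener/TcpClient API is analogous; the class and field names are made up):

        import java.io.InputStream;
        import java.net.ServerSocket;
        import java.net.Socket;
        import java.net.SocketAddress;
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        public class SessionTrackingServer {
            // One entry per connected client, keyed by its remote endpoint.
            private final Map<SocketAddress, StringBuilder> sessions = new ConcurrentHashMap<>();

            public void serve() throws Exception {
                try (ServerSocket listener = new ServerSocket(5672)) {
                    while (true) {
                        Socket client = listener.accept();            // one Socket per client, like AcceptTcpClient()
                        new Thread(() -> handle(client)).start();     // handle each client on its own thread
                    }
                }
            }

            private void handle(Socket client) {
                SocketAddress key = client.getRemoteSocketAddress();  // identifies this client while connected
                StringBuilder history = sessions.computeIfAbsent(key, k -> new StringBuilder());
                try (InputStream in = client.getInputStream()) {
                    int b;
                    while ((b = in.read()) != -1) {
                        history.append(String.format("0x%x ", b));    // bytes from this client stay in its own session
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                } finally {
                    sessions.remove(key);                             // connection closed, session ends
                }
            }
        }

    If clients disconnect and reconnect between bytes, the remote endpoint alone is not a reliable session key; in that case the first bytes of each connection would have to carry a client-chosen session id.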

    Read the article

  • How to reuse results with a schema for end of day stock-data

    - by Vishalrix
    I am creating a database schema to be used for technical analysis like top-volume gainers, top-price gainers etc. I have checked answers to questions here, like the design question. Having taken the hint from boe100's answer there I have a schema modeled pretty much on it, thusly: Symbol - char 6 //primary Date - date //primary Open - decimal 18, 4 High - decimal 18, 4 Low - decimal 18, 4 Close - decimal 18, 4 Volume - int Right now this table containing End Of Day (EOD) data will be about 3 million rows for 3 years. Later when I get/need more data it could be 20 million rows. The front end will be making requests like "give me the top price gainers on date X over Y days". That request is one of the simpler ones, and as such is not too costly, time-wise, I assume. But a request like "give me the top volume gainers for the last 10 days, with the previous 100 days acting as baseline" could prove 10-100 times costlier. The result of such a request would be a float which signifies how many times the volume has grown, etc. One option I have is adding a column for each such result. And if the user asks for volume gain in 10 days over 20 days, that would require another table. The total number of such tables could easily cross 100, especially if I start using other results as tables, like MACD-10, MACD-100, each of which will require its own column. Is this a feasible solution? Another option is to keep the results in cached HTML files and present them to the user. I don't have much experience in web development, so to me it looks messy; but I could be wrong (of course!). Is that an option too? Let me add that I am/will be using mod_perl to present the response to the user, with much of the work on the MySQL database being done using Perl. I would like to have a response time of 1-2 seconds.

    Read the article

  • flash creates more than one http request

    - by MilanAleksic
    We are facing an issue directly connected with the Flash API we've given to a 3rd-party Flash vendor. To make a long story short, our API basically wraps domain logic on the client and creates a single POST request towards the server in JSON format. Everything is fine except that, in the combination MacOS + Safari, we receive double requests on the server (?). Even more interesting, we are receiving different agent names - one is the expected name/descriptor of the browser and system, the other is "CFNetwork". POST /RuntimeDelegate.ashx - 80 Mozilla/5.0+(Macintosh;+U;+Intel+Mac+OS+X+10_4_11;+fr)+AppleWebKit/531.22.7+(KHTML,+like+Gecko)+Version/4.0.5+Safari/531.22.7 200 0 0 POST /RuntimeDelegate.ashx - 80 CFNetwork/129.24 200 0 0 POST /RuntimeDelegate.ashx - 80 Mozilla/5.0+(Macintosh;+U;+Intel+Mac+OS+X+10_4_11;+fr)+AppleWebKit/531.22.7+(KHTML,+like+Gecko)+Version/4.0.5+Safari/531.22.7 200 0 0 POST /RuntimeDelegate.ashx - 80 Mozilla/5.0+(Macintosh;+U;+Intel+Mac+OS+X+10_4_11;+fr)+AppleWebKit/531.22.7+(KHTML,+like+Gecko)+Version/4.0.5+Safari/531.22.7 200 0 0 POST /RuntimeDelegate.ashx - 80 CFNetwork/129.24 200 0 0 POST /RuntimeDelegate.ashx - 80 Mozilla/5.0+(Macintosh;+U;+Intel+Mac+OS+X+10_4_11;+fr)+AppleWebKit/531.22.7+(KHTML,+like+Gecko)+Version/4.0.5+Safari/531.22.7 200 0 0 POST /RuntimeDelegate.ashx - 80 CFNetwork/129.24 200 0 0 POST /RuntimeDelegate.ashx - 80 CFNetwork/129.24 200 0 0 Has anyone encountered anything like this before?

    Read the article

  • Java webapp: how to implement a web bug (1x1 pixel)?

    - by NoozNooz42
    In the accepted answer to the following question, an SO regular with 13K+ rep suggests using a "web bug" (non-cacheable 1x1 img) to be able to track requests in the logs: http://stackoverflow.com/questions/1784893 How can I do this in Java? Basically, I've got two issues: how to make sure the 1x1 image is not cacheable (how to set the header)? how to make sure the requests for this 1x1 image will appear in the logs? I'm looking for the exact piece of code because I know how to write a .jsp/servlet and I know how to serve a 1x1 image :) My question is really about the exact .jsp/servlet that I should write and how/what needs to be done so that Tomcat logs the request. For example I plan to use the following mapping: <servlet-mapping> <servlet-name>WebBugServlet</servlet-name> <url-pattern>/webbug*</url-pattern> </servlet-mapping> and then use an img tag referencing a "webbug.png" (or .gif), so how do I write the .jsp/servlet? What/where should I look for in the logs?
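
    A minimal sketch of such a servlet (the cache headers are the standard no-cache set; the file extension only affects the content type). The pixel below is the well-known 1x1 transparent GIF constant, and the request path plus query string shows up in Tomcat's access log whenever the usual AccessLogValve is enabled:

        import java.io.IOException;
        import java.util.Base64;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class WebBugServlet extends HttpServlet {
            // 1x1 transparent GIF, decoded once from the well-known base64 constant.
            private static final byte[] PIXEL = Base64.getDecoder()
                    .decode("R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7");

            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                // Tell browsers and proxies never to cache the pixel.
                resp.setHeader("Cache-Control", "no-cache, no-store, must-revalidate");
                resp.setHeader("Pragma", "no-cache");
                resp.setDateHeader("Expires", 0);
                resp.setContentType("image/gif");
                resp.setContentLength(PIXEL.length);
                resp.getOutputStream().write(PIXEL);
                // Nothing extra is needed for logging: the hit (including any query string such as
                // ?page=home) appears in the access log like any other request.
            }
        }

    Referencing it as <img src="/webbug/pixel.gif?page=home" width="1" height="1"/> then produces one access-log line per page view.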

    Read the article

  • making binned boxplot in matplotlib with numpy and scipy in Python

    - by user248237
    I have a 2-d array containing pairs of values and I'd like to make a boxplot of the y-values by different bins of the x-values. I.e. if the array is: my_array = array([[1, 40.5], [4.5, 60], ...]]) then I'd like to bin my_array[:, 0] and then for each of the bins, produce a boxplot of the corresponding my_array[:, 1] values that fall into each box. So in the end I want the plot to contain as many box plots as there are bins. I tried the following: min_x = min(my_array[:, 0]) max_x = max(my_array[:, 1]) num_bins = 3 bins = linspace(min_x, max_x, num_bins) elts_to_bins = digitize(my_array[:, 0], bins) However, this gives me values in elts_to_bins that range from 1 to 3. I thought I should get 0-based indices for the bins, and I only wanted 3 bins. I'm assuming this is due to some trickiness with how bins are represented in linspace vs. digitize. What is the easiest way to achieve this? I want num_bins equally spaced bins, with the first bin containing the lower half of the data and the upper bin containing the upper half... i.e., I want each data point to fall into some bin, so that I can make a boxplot. Thanks.

    Read the article

  • source of historical stock data

    - by rmeador
    I'm trying to make a stock market simulator (perhaps eventually growing into a predicting AI), but I'm having trouble finding data to use. I'm looking for a (hopefully free) source of historical stock market data. Ideally, it would be a very fine-grained (second or minute interval) data set with price and volume of every symbol on NASDAQ and NYSE (and perhaps others if I get adventurous). Does anyone know of a source for such info? I found this question which indicates Yahoo offers historical data in CSV format, but I've been unable to find out how to get it in a cursory examination of the site linked. I also don't like the idea of downloading the data piecemeal in CSV files... I imagine Yahoo would get upset and shut me off after the first few thousand requests. I also discovered another question that made me think I'd hit the jackpot, but unfortunately that OpenTick site seems to have closed its doors... too bad, since I think they were exactly what I wanted. I'd also be able to use data that's just open/close price and volume of every symbol every day, but I'd prefer all the data if I can get it. Any other suggestions?

    Read the article

  • Ajax-heavy JS apps using excessive amounts of memory over time

    - by Shane Reustle
    I seem to have some pretty large memory leaks in an app that I am working on. The app itself is not very complex. Every 15 seconds, the page requests approx 40kb of JSON from the server, and draws a table on the page using it. It is cheaper to redraw the table each time because the data is almost always new. I am attaching a few events to the table, approx 5 per line, 30 lines in the table. I use jQuery's .html() method to put the new HTML into the container and overwrite the existing content. I do this specifically so that jQuery's special cleanup functions go in and attempt to detach all events on the elements in the element that it is overwriting. I then also delete the large HTML variables once they are sent to the DOM, using delete my_var. I have checked for circular references and attached events that are never cleared a few times, but never REALLY dug into it. I was wondering if someone could give me a few pointers on how to optimize a very heavy app like this. I just picked up "High Performance JavaScript" by Nicholas Zakas, but haven't had much time to get into it yet. To give an idea of how much memory this is using: after ~4 hours, it is using about 420,000k in Chrome, and much more in Firefox or IE. Thanks!

    Read the article

  • Manage multiple UDP calls

    - by rayman
    Hi all, I would like some advice on this issue: I am using JBoss 5.1.0 and EJB 3.0. I have a system that sends requests via UDP to remote modems and is supposed to wait for an answer from the target modem. The remote modems support only UDP calls, therefore I have to design an asynchronous mechanism (also because I want to request X modems in parallel). This is what I am trying to do: all calls are retrieved from the database, then each call is added as a message to a JMS queue. Let's say I set X MDBs on that queue, so I can work asynchronously. Each MDB then sends a UDP request to the IP address (remote modem) parsed from the queue message. So basically each MDB that takes a message sends a UDP request to the remote modem and waits for an answer from that modem. Now here is the bug: a scenario can happen where an MDB gets an answer, but not from the right modem (the one it requested in the first place). That bad scenario causes two wrong things: a. the sender that sent the request will wait forever, since the answer never returned to it (it got accepted by another MDB); b. the MDB that received the answer is not the right one, and if it was in "listener" mode it was supposed to wait for an answer from a different sender (otherwise it wouldn't get any messages). Of course I can handle everything with a RETRY mechanism, so both MDBs (the one that got an answer from the wrong sender, and the one that never got its answer) will try again, in the hope that next time it will succeed. This is the mechanism; maybe you could tell me whether there is any design pattern, or any other effective solution, for this problem? Thanks, ray.
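
    One way to avoid that cross-talk, sketched below on the assumption that each MDB/worker owns its own socket for the duration of a request, is to connect() the UDP socket to the specific modem before waiting: a connected DatagramSocket only delivers packets coming from that one peer, so an answer from a different modem can never reach the wrong worker. (The alternative is a correlation id inside the payload and a single dispatcher that routes replies; the payload format below is purely hypothetical.)

        import java.net.DatagramPacket;
        import java.net.DatagramSocket;
        import java.net.InetAddress;
        import java.net.SocketTimeoutException;

        public class ModemRequester {
            // Sends one request and waits only for a reply from that same modem.
            public byte[] request(InetAddress modemAddress, int modemPort, byte[] payload, int timeoutMillis)
                    throws Exception {
                try (DatagramSocket socket = new DatagramSocket()) {      // own ephemeral port per request
                    socket.connect(modemAddress, modemPort);              // packets from any other source are dropped
                    socket.setSoTimeout(timeoutMillis);                   // don't wait forever
                    socket.send(new DatagramPacket(payload, payload.length));

                    byte[] buffer = new byte[512];
                    DatagramPacket reply = new DatagramPacket(buffer, buffer.length);
                    try {
                        socket.receive(reply);                            // can only be this modem's answer
                        byte[] answer = new byte[reply.getLength()];
                        System.arraycopy(buffer, 0, answer, 0, reply.getLength());
                        return answer;
                    } catch (SocketTimeoutException e) {
                        return null;                                      // let the JMS retry mechanism re-deliver
                    }
                }
            }
        }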

    Read the article

  • How to use a loop to download HTML with paging?

    - by Nai
    I want to loop through this URL and download the HTML. https://www.googleapis.com/customsearch/v1?key=AIzaSyAAoPQprb6aAV-AfuVjoCdErKTiJHn-4uI&cx=017576662512468239146:omuauf_lfve&q=" + searchTermFormat + "&num=10" +"&start=" + i start and num control the paging of the URL. So if &start=2 and &num=10, it will scrape 10 results from page 2. Given that Google has a max limit of num = 10, how can I write a loop that loops through the HTML and scrapes the results for the first 10 pages? This is what I have so far, which just scrapes the first page. //input search term Console.WriteLine("What is your search query?:"); string searchTerm = Console.ReadLine(); //concatenate the strings using + symbol to make it URL friendly for google string searchTermFormat = searchTerm.Replace(" ", "+"); //create a new instance of WebClient and use the DownloadString method from the WebClient class to download the html WebClient client = new WebClient(); int i = 1; string Json = client.DownloadString("https://www.googleapis.com/customsearch/v1?key=AIzaSyAAoPQprb6aAV-AfuVjoCdErKTiJHn-4uI&cx=017576662512468239146:omuauf_lfve&q=" + searchTermFormat + "&num=10" + "&start=" + i); //create a new instance of JavaScriptSerializer and deserialise the desired content JavaScriptSerializer js = new JavaScriptSerializer(); GoogleSearchResults results = js.Deserialize<GoogleSearchResults>(Json); //output results to console Console.WriteLine(js.Serialize(results)); Console.ReadLine();
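
    The Custom Search API's start parameter is 1-based and moves forward by num, so the first 10 pages correspond to start = 1, 11, 21, ..., 91. The loop shape is all that changes; here it is sketched in Java with the JDK's HttpClient purely to illustrate the structure (the key, cx and query placeholders are assumptions):

        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;

        public class PagedSearch {
            public static void main(String[] args) throws Exception {
                String base = "https://www.googleapis.com/customsearch/v1?key=YOUR_KEY"
                        + "&cx=YOUR_CX&q=hello+world&num=10";
                HttpClient client = HttpClient.newHttpClient();

                // 10 pages of 10 results each: start = 1, 11, 21, ..., 91
                for (int start = 1; start <= 91; start += 10) {
                    HttpRequest request = HttpRequest.newBuilder(URI.create(base + "&start=" + start)).build();
                    String json = client.send(request, HttpResponse.BodyHandlers.ofString()).body();
                    System.out.println(json);   // deserialize each page here instead of printing it
                }
            }
        }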

    Read the article

  • How to stop Android GPS using "Mobile data"

    - by prepbgg
    My app requests location updates with "minTime" set to 2 seconds. When "Mobile data" is switched on (in the phone's settings) and GPS is enabled the app uses "mobile data" at between 5 and 10 megabytes per hour. This is recorded in the ICS "Data usage" screen as usage by "Android OS". In an attempt to prevent this I have unticked Settings-"Location services"-"Google's location service". Does this refer to Assisted GPS, or is it something more than that? Whatever it is, it seems to make no difference to my app's internet access. As further confirmation that it is the GPS usage by my app that is causing the mobile data access I have observed that the internet data activity indicator on the status bar shows activity when and only when the GPS indicator is present. The only way to prevent this mobile data usage seems to be to switch "Mobile data" off, and GPS accuracy seems to be almost as good without the support of mobile data. However, it is obviously unsatisfactory to have to switch mobile data off. The only permissions in the Manifest are "android.permission.ACCESS_FINE_LOCATION" (and "android.permission.WRITE_EXTERNAL_STORAGE"), so the app has no explicit permission to use internet data. The LocationManager code is ` criteria.setAccuracy(Criteria.ACCURACY_FINE); criteria.setSpeedRequired(false); criteria.setAltitudeRequired(false); criteria.setBearingRequired(true); criteria.setCostAllowed(false); criteria.setPowerRequirement(Criteria.NO_REQUIREMENT); bestProvider = lm.getBestProvider(criteria, true); if (bestProvider != null) { lm.requestLocationUpdates(bestProvider, gpsMinTime, gpsMinDistance, this); ` The reference for LocationManager.getBestProvider says If no provider meets the criteria, the criteria are loosened ... Note that the requirement on monetary cost is not removed in this process. However, despite setting setCostAllowed to false the app still incurs a potential monetary cost. What else can I do to prevent the app from using mobile data?
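
    One thing worth trying, sketched below, is to drop the Criteria/getBestProvider lookup and request the GPS provider explicitly, so the framework never selects or assists with the network provider; whether that removes the "Android OS" data usage on a given device is not guaranteed.

        // Inside the Activity/Service that already implements LocationListener (a sketch, not a drop-in).
        LocationManager lm = (LocationManager) getSystemService(Context.LOCATION_SERVICE);
        if (lm.isProviderEnabled(LocationManager.GPS_PROVIDER)) {
            // Pure GPS fixes only; no Criteria, so no silent fallback to network-based location.
            lm.requestLocationUpdates(LocationManager.GPS_PROVIDER, gpsMinTime, gpsMinDistance, this);
        }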

    Read the article

  • HDFS some datanodes of cluster are suddenly disconnected while reducers are running

    - by user1429825
    I have 8 slave computers and 1 master computer for running Hadoop (ver 0.21) some datanodes of cluster are suddenly disconnected while I was running MapReduce code on 10GB data After all mappers finished and around 80% of reducers was processed, randomly one or more datanode disconned from network. and then the other datanodes start to disappear from network even if I killed the MapReduce job when I found some datanode was disconnected. I've tried to change dfs.datanode.max.xcievers to 4096, turned off fire-walls of all computing node, disabled selinux and increased the number of file open limit to 20000 but they didn't work at all... anyone have a idea to solve this problem? followings are error log from mapreduce 12/06/01 12:31:29 INFO mapreduce.Job: Task Id : attempt_201206011227_0001_r_000006_0, Status : FAILED java.io.IOException: Bad connect ack with firstBadLink as ***.***.***.148:20010 at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:889) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:820) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:427) and followings are logs from datanode 2012-06-01 13:01:01,118 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-5549263231281364844_3453 src: /*.*.*.147:56205 dest: /*.*.*.142:20010 2012-06-01 13:01:01,136 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020) Starting thread to transfer block blk_-3849519151985279385_5906 to *.*.*.147:20010 2012-06-01 13:01:19,135 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-5797481564121417802_3453 to *.*.*.146:20010 got java.net.ConnectException: > Connection timed out at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:373) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1257) at java.lang.Thread.run(Thread.java:722) 2012-06-01 13:06:20,342 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded for blk_6674438989226364081_3453 2012-06-01 13:09:01,781 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-3849519151985279385_5906 to *.*.*.147:20010 got java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. 
ch : java.nio.channels.SocketChannel[connected local=/*.*.*.142:60057 remote=/*.*.*.147:20010] at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246) at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:164) at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:203) at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:388) at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:476) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1284) at java.lang.Thread.run(Thread.java:722) hdfs-site.xml <configuration> <property> <name>dfs.name.dir</name> <value>/home/hadoop/data/name</value> </property> <property> <name>dfs.data.dir</name> <value>/home/hadoop/data/hdfs1,/home/hadoop/data/hdfs2,/home/hadoop/data/hdfs3,/home/hadoop/data/hdfs4,/home/hadoop/data/hdfs5</value> </property> <property> <name>dfs.replication</name> <value>3</value> </property> <property> <name>dfs.datanode.max.xcievers</name> <value>4096</value> </property> <property> <name>dfs.http.address</name> <value>0.0.0.0:20070</value> <description>50070 The address and the base port where the dfs namenode web ui will listen on. If the port is 0 then the server will start on a free port. </description> </property> <property> <name>dfs.datanode.http.address</name> <value>0.0.0.0:20075</value> <description>50075 The datanode http server address and port. If the port is 0 then the server will start on a free port. </description> </property> <property> <name>dfs.secondary.http.address</name> <value>0.0.0.0:20090</value> <description>50090 The secondary namenode http server address and port. If the port is 0 then the server will start on a free port. </description> </property> <property> <name>dfs.datanode.address</name> <value>0.0.0.0:20010</value> <description>50010 The address where the datanode server will listen to. If the port is 0 then the server will start on a free port. </description> <property> <name>dfs.datanode.ipc.address</name> <value>0.0.0.0:20020</value> <description>50020 The datanode ipc server address and port. If the port is 0 then the server will start on a free port. 
</description> </property> <property> <name>dfs.datanode.https.address</name> <value>0.0.0.0:20475</value> </property> <property> <name>dfs.https.address</name> <value>0.0.0.0:20470</value> </property> </configuration> mapred-site.xml <configuration> <property> <name>mapred.job.tracker</name> <value>masternode:29001</value> </property> <property> <name>mapred.system.dir</name> <value>/home/hadoop/data/mapreduce/system</value> </property> <property> <name>mapred.local.dir</name> <value>/home/hadoop/data/mapreduce/local</value> </property> <property> <name>mapred.map.tasks</name> <value>32</value> <description> default number of map tasks per job.</description> </property> <property> <name>mapred.tasktracker.map.tasks.maximum</name> <value>4</value> </property> <property> <name>mapred.reduce.tasks</name> <value>8</value> <description> default number of reduce tasks per job.</description> </property> <property> <name>mapred.map.child.java.opts</name> <value>-Xmx2048M</value> </property> <property> <name>io.sort.mb</name> <value>500</value> </property> <property> <name>mapred.task.timeout</name> <value>1800000</value> <!-- 30 minutes --> </property> <property> <name>mapred.job.tracker.http.address</name> <value>0.0.0.0:20030</value> <description> 50030 The job tracker http server address and port the server will listen on. If the port is 0 then the server will start on a free port. </description> </property> <property> <name>mapred.task.tracker.http.address</name> <value>0.0.0.0:20060</value> <description> 50060 </property> </configuration>

    Read the article

  • Stretch Background Image & Resize With Browser Window

    - by user241673
    I am trying to replicate the image resizing found at http://devkick.com/lab/fsgallery/ but with the code I have below, it is not working properly. When resizing the browser window to have small width and big height, white space shows up at the bottom of the page. feel free to see it & edit at http://jsbin.com/ifolu3 The CSS: html, body {width:100%; height:100%; overflow:hidden;} div.bg {position:absolute; width:200%; height:200%; top:-50%; left:-50%;} img.bg {min-height:50%; min-width:50%; margin:0 auto; display:block;} The JS/jQuery: $(window).resize(function(){ var ratio = Math.max($(window).width()/$('img.bg').width(),$(window).height()/$('img.bg').height()); if ($(window).width() $(window).height()) { $('img.bg').css({width:image.width()*ratio,height:'auto'}); } else { $('img.bg').css({width:'auto',height:image.height()*ratio}); } }); The HTML - (sorry for the formatting, had trouble getting "<" to show) [body] [div class="bg"] [img class="bg" src="bg.jpg" /] [/div] [/body]

    Read the article

  • Trying to write up a C daemon, but don't know enough C to continue

    - by JamesM-SiteGen
    Okay, so I want this daemon to run in the background with little to no interaction. I plan to have it work with Apache, lighttpd, etc. to send the session and request information, allowing C to generate a website from an object DB. Saving will have to be an option, so you can start the daemon with an existing DB but it will not save to it unless you log in to the admin area, enable saving, and restart the daemon. Summary of the daemon: Load a database from a file. Have a function to restart the daemon. Allow Apache, lighttpd, etc. to get the necessary data about the request and session. A variable to allow the database to be saved to the file; otherwise it will only be stored in RAM. If it is set to save back to the file, then keep only the necessary data in RAM. Use SQLite for the database file. Build a webpage from some template files, with $(myVar) for getting variables. Get templates from a directory: ./templates/01-test/{index.html,template.css,template.js} Live version of the code and more information: http://typewith.me/YbGB1h1g1p Also, I am working on a website CMS in PHP, but I am trying to switch to C as it is faster than PHP. (PHP is quite fast, but making a few MySQL requests for every webpage is quite inefficient, and I'm sure it can be far better, so an object that we can recall data from in C would have to be faster.) P.S. I am using Arch Linux, not MS Windows, with the package group base-devel for the common developer tools such as make and makepkg. Edit: Oops, forgot the question ;) Okay, so the question is: how can I turn this basic C daemon into a base for what I am attempting to do here?

    Read the article

  • Rails3 renders a js.erb template with a text/html content-type instead of text/javascript

    - by Yannis
    Hi, I'm building a new app with 3.0.0.beta3. I simply try to render a js.erb template to an Ajax request for the following action (in publications_controller.rb): def get_pubmed_data entry = Bio::PubMed.query(params[:pmid])# searches PubMed and get entry @publication = Bio::MEDLINE.new(entry) # creates Bio::MEDLINE object from entry text flash[:warning] = "No publication found."if @publication.title.blank? and @publication.authors.blank? and @publication.journal.blank? respond_to do |format| format.js end end Currently, my get_pubmed_data.js.erb template is simply alert('<%= @publication.title %>') The server is responding with the following alert('Evidence for a herpes simplex virus-specific factor controlling the transcription of deoxypyrimidine kinase.') which is perfectly fine except that nothing happen in the browser, probably because the content-type of the response is 'text/html' instead of 'text/javascript' as shown by the response header partially reproduced here: Status 200 Keep-Alive timeout=5, max=100 Connection Keep-Alive Transfer-Encoding chunked Content-Type text/html; charset=utf-8 Is this a bug or am I missing something? Thanks for your help!

    Read the article

  • UITableView with dynamic cell heights -- what do I need to do to fix scrolling down?

    - by Ian Terrell
    I am building a teensy tiny little Twitter client on the iPhone. Naturally, I'm displaying the tweets in a UITableView, and they are of course of varying lengths. I'm dynamically changing the height of the cell based on the text quite fine: - (CGFloat)heightForTweetCellWithString:(NSString *)text { CGFloat height = Buffer + [text sizeWithFont:Font constrainedToSize:Size lineBreakMode:LineBreakMode].height; return MAX(height, MinHeight); } - (CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath { NSString *text = // get tweet text for this indexpath return [self heightForTweetCellWithString:text]; } } I'm displaying the actual tweet cell using the algorithm in the PragProg book: - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"TweetCell"; TweetCell *cell = (TweetCell *)[tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [self createNewTweetCellFromNib]; } cell.tweet.text = // tweet text // set other labels, etc return cell; } When I boot up, all the tweets visible display just fine. However, when I scroll down, the tweets below are quite mussed up -- it appears that once a cell has scrolled off the screen, the cell height for the one above it gets resized to be larger than it should be, and obscures part of the cell below it. When the cell reaches the top of the view, it resets itself and renders properly. Scrolling up presents no difficulties. Here is a video that shows this in action: http://screencast.com/t/rqwD9tpdltd I've tried quite a bit already: resizing the cell's frame on creation, using different identifiers for cells with different heights (i.e. [NSString stringWithFormat:@"Identifier%d", rowHeight]), changing properties in Interface Builder... If there are additional code snippets I can post, please let me know. Thanks in advance for your help!

    Read the article

  • Need a tool to search large structured text documents for words, phrases and related phrases

    - by pitosalas
    I have to keep up with structured documents containing things such as requests for proposals, government program reports, threat models and all kinds of things like that. They are in techno-legalese, as I would call it: highly structured, with section numbering and 3, 4 and 5 levels of nesting, all in English. I need a more efficient way to locate those paragraphs and nuggets that matter to me. So what I'd like is a kind of local document index/repository that would allow me to have some standing queries and easily locate sections in documents that talk about my queries. Here's an example: I'd like to load in 10 large PDF files, each of say 100 pages. Each PDF contains English text, formatted very nicely into paragraphs and sections. I'd like to specify that I am interested in "blogging platforms", "weaknesses in Ruby", "localization and internationalization". Ideally I would then look at a list that shows the section of text, the name of the document, and other information that seems to be related to and/or includes the words and phrases I specified. I am sure something like this exists. I would call it something like document indexing, document comprehension or structured searching.
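
    Off-the-shelf options exist for this (desktop search tools, Lucene/Solr-based indexers), but the core idea - an inverted index from terms to the sections that contain them - is small enough to sketch. The classes below are hypothetical and assume the PDFs have already been extracted to plain-text sections:

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Locale;
        import java.util.Map;

        public class SectionIndex {
            // term -> labels of the sections ("document §number") containing it
            private final Map<String, List<String>> index = new HashMap<>();

            public void addSection(String documentName, String sectionNumber, String text) {
                String label = documentName + " §" + sectionNumber;
                for (String term : text.toLowerCase(Locale.ROOT).split("\\W+")) {
                    if (term.isEmpty()) continue;
                    index.computeIfAbsent(term, t -> new ArrayList<>()).add(label);
                }
            }

            // Standing query: sections containing every word of the phrase (simple bag-of-words match).
            public List<String> query(String phrase) {
                List<String> result = null;
                for (String term : phrase.toLowerCase(Locale.ROOT).split("\\W+")) {
                    if (term.isEmpty()) continue;
                    List<String> hits = index.getOrDefault(term, List.of());
                    if (result == null) result = new ArrayList<>(hits);
                    else result.retainAll(hits);
                }
                return result == null ? List.of() : result;
            }
        }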

    Read the article

  • ByteFlow installation Error on Windows

    - by Patrick
    Hi Folks, When I try to install ByteFlow on my Windows development machine, I got the following MySQL error, and I don't know what to do, please give me some suggestion. Thank you so much!!! E:\byteflow-5b6d964917b5>manage.py syncdb !!! Read about DEBUG in settings_local.py and then remove me !!! !!! Read about DEBUG in settings_local.py and then remove me !!! J:\Program Files\Python26\lib\site-packages\MySQLdb\converters.py:37: DeprecationWarning: the sets module is deprecated from sets import BaseSet, Set Creating table auth_permission Creating table auth_group Creating table auth_user Creating table auth_message Creating table django_content_type Creating table django_session Creating table django_site Creating table django_admin_log Creating table django_flatpage Creating table actionrecord Creating table blog_post Traceback (most recent call last): File "E:\byteflow-5b6d964917b5\manage.py", line 11, in <module> execute_manager(settings) File "J:\Program Files\Python26\lib\site-packages\django\core\management\__init__.py", line 362, in execute_manager utility.execute() File "J:\Program Files\Python26\lib\site-packages\django\core\management\__init__.py", line 303, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "J:\Program Files\Python26\lib\site-packages\django\core\management\base.py", line 195, in run_from_argv self.execute(*args, **options.__dict__) File "J:\Program Files\Python26\lib\site-packages\django\core\management\base.py", line 222, in execute output = self.handle(*args, **options) File "J:\Program Files\Python26\lib\site-packages\django\core\management\base.py", line 351, in handle return self.handle_noargs(**options) File "J:\Program Files\Python26\lib\site-packages\django\core\management\commands\syncdb.py", line 78, in handle_noargs cursor.execute(statement) File "J:\Program Files\Python26\lib\site-packages\django\db\backends\util.py", line 19, in execute return self.cursor.execute(sql, params) File "J:\Program Files\Python26\lib\site-packages\django\db\backends\mysql\base.py", line 84, in execute return self.cursor.execute(query, args) File "J:\Program Files\Python26\lib\site-packages\MySQLdb\cursors.py", line 166, in execute self.errorhandler(self, exc, value) File "J:\Program Files\Python26\lib\site-packages\MySQLdb\connections.py", line 35, in defaulterrorhandler raise errorclass, errorvalue _mysql_exceptions.OperationalError: (1071, 'Specified key was too long; max key length is 767 bytes')

    Read the article

  • How to make safe frequent DataSource switches for AbstractRoutingDataSource?

    - by serg555
    I implemented Dynamic DataSource Routing for Spring+Hibernate according to this article. I have several databases with same structure and I need to select which db will run each specific query. Everything works fine on localhost, but I am worrying about how this will hold up in real web site environment. They are using some static context holder to determine which datasource to use: public class CustomerContextHolder { private static final ThreadLocal<CustomerType> contextHolder = new ThreadLocal<CustomerType>(); public static void setCustomerType(CustomerType customerType) { Assert.notNull(customerType, "customerType cannot be null"); contextHolder.set(customerType); } public static CustomerType getCustomerType() { return (CustomerType) contextHolder.get(); } public static void clearCustomerType() { contextHolder.remove(); } } It is wrapped inside some ThreadLocal container, but what exactly does that mean? What will happen when two web requests call this piece of code in parallel: CustomerContextHolder.setCustomerType(CustomerType.GOLD); //<another user will switch customer type here to CustomerType.SILVER in another request> List<Item> goldItems = catalog.getItems(); Is every web request wrapped into its own thread in Spring MVC? Will CustomerContextHolder.setCustomerType() changes be visible to other web users? My controllers have synchronizeOnSession=true. How to make sure that nobody else will switch datasource until I run required query for current user? Thanks.
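
    A ThreadLocal gives every thread its own independent copy of the value, and the servlet container runs each HTTP request on a single worker thread, so one request's setCustomerType() is invisible to a request running on another thread; the two calls in the snippet above cannot overwrite each other. The routing datasource reads that same thread-bound key at the moment a connection is needed, roughly like this (a sketch following the article's pattern; the class name is an assumption):

        import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

        public class CustomerRoutingDataSource extends AbstractRoutingDataSource {
            @Override
            protected Object determineCurrentLookupKey() {
                // Runs on the thread handling the current request, so it only ever sees
                // the CustomerType that this request bound via CustomerContextHolder.
                return CustomerContextHolder.getCustomerType();
            }
        }

    The practical caveat is that the key must be set before the connection is first obtained (usually at transaction begin) and cleared in a finally block afterwards, e.g. in a filter or interceptor, so a pooled thread never carries a stale customer type into the next request.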

    Read the article

  • css menu <ul><li> dynamically centered or width of buttons that covers the whole page

    - by Tony Stark
    I am building a home page for my minecraft server. Probably in the following 4-6 months I will opend my second and this is why I am in trouble. My first site is 1000 pixel wide, and the second will be 1200. First big difference. My menus are dinamically generated by my php code. It checks on my databases if there is another button or it is over. These buttons can be added or removed directly online. Another big issue is the browser compatibility. In a survey I did on our previous server I had a lot of users using: chrome, internet explorer, safari and firefox. That means that I must find a solution that is compatible with most browsers. What do I have to do? I came up with this CSS, which is touch compatible, it allows menus to be swapped to the left and it is enough to set 1 parameter to fix it for every page width. Sadly it is left aligned. body, nav, ul, li, a {margin: 0; padding: 0;} body {font-family: Verdana,"Helvetica Neue", Helvetica, Arial, sans-serif; } a {text-decoration: none;} .container { max-width: 900px; margin: 0px auto 0px auto; } .toggleMenu { display: none; background: #666; padding: 10px 15px; color: #999999; } .nav { border: 1px solid #424242; background-color: #121212; filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#686868', endColorstr='#121212'); background-image: -moz-linear-gradient(#686868, #121212); background-image: -webkit-gradient(linear, left top, left bottom, from(#686868), to(#121212)); background-image: -webkit-linear-gradient(#686868, #121212); background-image: -o-linear-gradient(#686868, #121212); background-image: -ms-linear-gradient(#686868, #121212); background-image: linear-gradient(#686868, #121212); -moz-box-shadow: 0 1px 1px #777, 0 1px 0 #666 inset; -webkit-box-shadow: 0 1px 1px #777, 0 1px 0 #666 inset; box-shadow: 0 1px 1px #777, 0 1px 0 #666 inset; list-style: none; *zoom: 1; position: relative; } .nav:before,.nav:after { content: " "; display: table; } .nav:after { clear: both; } .nav ul { list-style: none; width: 11em; z-index: 1; background-color: #121212; -moz-box-shadow: 0 -1px rgba(255,255,255,.3); -webkit-box-shadow: 0 -1px 0 rgba(255,255,255,.3); box-shadow: 0 -1px 0 rgba(255,255,255,.3); } .nav a { padding: 10px 15px; color:#999999; text-transform: uppercase; font: bold 11px Arial, Helvetica; text-decoration: none; text-shadow: 0 1px 0 #000; *zoom: 1; } .nav a:hover{ color:#000000; background-color: #B2B2B2; filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#D3D3D3', endColorstr='#B2B2B2'); background-image: -moz-linear-gradient(#D3D3D3, #B2B2B2); background-image: -webkit-gradient(linear, left top, left bottom, from(#D3D3D3), to(#B2B2B2)); background-image: -webkit-linear-gradient(#D3D3D3, #B2B2B2); background-image: -o-linear-gradient(#D3D3D3, #B2B2B2); background-image: -ms-linear-gradient(#D3D3D3, #B2B2B2); background-image: linear-gradient(#D3D3D3, #B2B2B2); } /*Delimitazione di ogni tab | HOME | */ .nav li { position: relative; border-right: 1px solid #424242; -moz-box-shadow: 1px 0 0 #686868; -webkit-box-shadow: 1px 0 0 #686868; box-shadow: 1px 0 0 #686868; } .nav > li { float: left; border-top: 1px solid #424242; z-index: 200; } .nav > li > .parent { background-image: url("../downArrow.png"); background-repeat: no-repeat; background-position: center right; } .nav > li li > .parent { background-image: url("../rightArrow.png"); background-repeat: no-repeat; background-position: center right; } .nav > li > a { display: block; } .nav li ul { position: absolute; left: -9999px; z-index: 100; 
} /* freccetta che indica un sottomenu nell'ultimo tab */ .nav > li:last-child li > .parent{ background-image: url("../leftArrow.png"); background-repeat: no-repeat; background-position: left; } /*flip subsubmenu*/ .nav li.last.hover > ul { left:auto; right: 0; } .nav > li.hover > ul { left: 0; } .nav li li.hover > ul { left: 100%; top: 0; } /* Spostare il 2^ sottomenu a sinistra */ .nav li.last li.hover ul { left:auto; right: 100%; top:0; } .nav li li a { display: block; background-color: #686868; -moz-box-shadow: 0 -1px rgba(255,255,255,.3); -webkit-box-shadow: 0 -1px 0 rgba(255,255,255,.3); box-shadow: 0 -1px 0 rgba(255,255,255,.3); z-index:100; border-top: 1px solid #686868; } .nav li li li a { background-color: #686868; -moz-box-shadow: 0 -1px rgba(255,255,255,.3); -webkit-box-shadow: 0 -1px 0 rgba(255,255,255,.3); box-shadow: 0 -1px 0 rgba(255,255,255,.3); z-index:200; border-top: 1px solid #686868; } .nav li li li li a { display: block; background-color: #686868; -moz-box-shadow: 0 -1px rgba(255,255,255,.3); -webkit-box-shadow: 0 -1px 0 rgba(255,255,255,.3); box-shadow: 0 -1px 0 rgba(255,255,255,.3); z-index:300; border-top: 1px solid #686868; } .nav li li li li a { background-color: #686868; -moz-box-shadow: 0 -1px rgba(255,255,255,.3); -webkit-box-shadow: 0 -1px 0 rgba(255,255,255,.3); box-shadow: 0 -1px 0 rgba(255,255,255,.3); z-index:400; border-top: 1px solid #686868; } @media screen and (max-width: 768px) { .active { display: block; } .nav > li { float: none; } .nav > li > .parent { background-position: 95% 50%; } .nav li li .parent { background-image: url("../downArrow.png"); background-repeat: no-repeat; background-position: 95% 50%; } .nav ul { display: block; width: 100%; } .nav > li.hover > ul , .nav li li.hover ul { position: static; } } My girlfriend (who adapted this code) is really busy for school and cannot help me. Leaving the borders on the whole square (page width), is it possible to make buttons cover the page width dinamically? Or is it possible to center the buttons? Thank you very much!

    Read the article

  • POST xml to php with apache2

    - by Berry
    I'm working on an application that receives XML data via POST, processes it with a PHP script, and returns an XML response. I'm getting the XML with this PHP code: $requestStr = file_get_contents('php://input'); $requests = simplexml_load_string($requestStr); which works fine on the Linux-based product hardware using nginx as the server. However, for testing I'd like to be able to run it on my MacBook Pro, so I can avoid the "build image, install on product, reboot product, wait, test change" loop while I do targeted development on this XML processor. I enabled "web sharing" which starts up Apache, added a rewrite rule to point a convenient URI at my development source directory and used curl to send a request to my PHP script thus: curl -H "Content-Type:text/xml" -d @request.xml http://localhost/test/path/testscript "testscript" is handled by the PHP script fine, but when it goes to read "php://input" I get nothing -- the empty string. Anyone have a clue why this would work under Linux with nginx and not under MacOS with Apache? I've googled and searched stackoverflow.com to no avail. Thanks for any answers. UPDATE: I've discovered that at least in this configuration, reading from php://stdin will work fine, while php://input will not. Who knew?

    Read the article

  • Query with many CASE statements - optimization

    - by Nemanja Vujacic
    Hi guys, I have one very dirty query that for sure can be optimized because there are so many CASE statements in it! SELECT (CASE pa.KplusTable_Id WHEN 1 THEN sp.sp_id WHEN 2 THEN fw.fw_id WHEN 3 THEN s.sw_Id WHEN 4 THEN id.ia_id END) as Deal_Id, max(CASE pa.KplusTable_Id WHEN 1 THEN sp.Trans_Id WHEN 2 THEN fw.Trans_Id WHEN 3 THEN s.Trans_Id WHEN 4 THEN id.Trans_Id END) as TransId_CurrentMax INTO #MaxRazlicitOdNull FROM #PotencijalniAktuelni pa LEFT JOIN kplus_sp sp (nolock) on sp.sp_id=pa.Deal_Id AND pa.KplusTable_Id=1 LEFT JOIN kplus_fw fw (nolock) on fw.fw_id=pa.Deal_Id AND pa.KplusTable_Id=2 LEFT JOIN dev_sw s (nolock) on s.sw_Id=pa.Deal_Id AND pa.KplusTable_Id=3 LEFT JOIN kplus_ia id (nolock) on id.ia_id=pa.Deal_Id AND pa.KplusTable_Id=4 WHERE isnull(CASE pa.KplusTable_Id WHEN 1 THEN sp.BROJ_TIKETA WHEN 2 THEN fw.BROJ_TIKETA WHEN 3 THEN s.tiket WHEN 4 THEN id.BROJ_TIKETA END, '')<>'' GROUP BY CASE pa.KplusTable_Id WHEN 1 THEN sp.sp_id WHEN 2 THEN fw.fw_id WHEN 3 THEN s.sw_Id WHEN 4 THEN id.ia_id END Because I have the same condition several times, do you have any idea how to optimize the query and make it simpler and better? All suggestions are welcome! Thanks in advance! Nemanja

    Read the article

  • How can I properly handle 404s in ASP.NET MVC?

    - by Brian
    I am just getting started on ASP.NET MVC so bear with me. I've searched around this site and various others and have seen a few implementations of this. EDIT: I forgot to mention I am using RC2 Using URL Routing: routes.MapRoute( "Error", "{*url}", new { controller = "Errors", action = "NotFound" } //404s ); The above seems to take care of requests like this (assuming default route tables setup by initial MVC project): "/blah/blah/blah/blah" Overriding HandleUnknownAction() in the controller itself: //404s - handle here (bad action requested protected override void HandleUnknownAction(string actionName) { ViewData["actionName"] = actionName; View("NotFound").ExecuteResult(this.ControllerContext); } However the previous strategies do not handle a request to a Bad/Unknown controller. For example, I do not have a "/IDoNotExist", if I request this I get the generic 404 page from the web server and not my 404 if I use routing + override. So finally, my question is: Is there any way to catch this type of request using a route or something else in the MVC framework itself? OR should I just default to using Web.Config customErrors as my 404 handler and forget all this? I assume if I go with customErrors I'll have to store the generic 404 page outside of /Views due to the Web.Config restrictions on direct access. Anyway any best practices or guidance is appreciated.

    Read the article

  • Simplest PHP Routing framework .. ?

    - by David
    I'm looking for the simplest implementation of a routing framework in PHP, in a typical PHP environment (Running on Apache, or maybe nginx) .. It's the implementation itself I'm mostly interested in, and how you'd accomplish it. I'm thinking it should handle URL's, with the minimal rewriting possible, (is it really a good idea, to have the same entrypoint for all dynamic requests?!), and it should not mess with the querystring, so I should still be able to fetch GET params with $_GET['var'] as you'd usually do.. So far I have only come across .htaccess solutions that puts everything through an index.php, which is sort of okay. Not sure if there are other ways of doing it. How would you "attach" what URL's fit to what controllers, and the relation between them? I've seen different styles. One huge array, with regular expressions and other stuff to contain the mapping. The one I think I like the best is where each controller declares what map it has, and thereby, you won't have one huge "global" map, but a lot of small ones, each neatly separated. So you'd have something like: class Root { public $map = array( 'startpage' => 'ControllerStartPage' ); } class ControllerStartPage { public $map = array( 'welcome' => 'WelcomeControllerPage' ); } // Etc ... Where: 'http://myapp/' // maps to the Root class 'http://myapp/startpage' // maps to the ControllerStartPage class 'http://myapp/startpage/welcome' // maps to the WelcomeControllerPage class 'http://myapp/startpage/?hello=world' // Should of course have $_GET['hello'] == 'world' What do you think? Do you use anything yourself, or have any ideas? I'm not interested in huge frameworks already solving this problem, but the smallest possible implementation you could think of. I'm having a hard time coming up with a solution satisfying enough, for my own taste. There must be something pleasing out there that handles a sane bootstrapping process of a PHP application without trying to pull a big magic hat over your head, and force you to use "their way", or the highway! ;)

    Read the article

  • "Session is Closed!" - NHibernate

    - by Alexis Abril
    This is in a web application environment: An initial request is able to successfully complete, however any additional requests return a "Session is Closed" response from the NHibernate framework. I'm using a HttpModule approach with the following code: public class MyHttpModule : IHttpModule { public void Init(HttpApplication context) { context.EndRequest += ApplicationEndRequest; context.BeginRequest += ApplicationBeginRequest; } public void ApplicationBeginRequest(object sender, EventArgs e) { CurrentSessionContext.Bind(SessionFactory.Instance.OpenSession()); } public void ApplicationEndRequest(object sender, EventArgs e) { ISession currentSession = CurrentSessionContext.Unbind( SessionFactory.Instance); currentSession.Dispose(); } public void Dispose() { } } SessionFactory.Instance is my singleton implementation, using FluentNHibernate to return an ISessionFactory object. In my repository class, I attempt to use the following syntax: public class MyObjectRepository : IMyObjectRepository { public MyObject GetByID(int id) { using (ISession session = SessionFactory.Instance.GetCurrentSession()) return session.Get<MyObject>(id); } } This allows code in the application to be called as such: IMyObjectRepository repo = new MyObjectRepository(); MyObject obj = repo.GetByID(1); I have a suspicion my repository code is to blame, but I'm not 100% sure on the actual implementation I should be using. I found a similar issue on SO here. I too am using WebSessionContext in my implementation, however, no solution was provided other than writing a custom SessionManager. For simple CRUD operations, is a custom session provider required apart from the built in tools(ie WebSessionContext)?

    Read the article

  • What are the responsibilities of the data layer?

    - by alimac83
    I'm working on a project where I had to add a data layer to my application. I've always thought that the data layer is purely responsible for CRUD functions, i.e. it shouldn't really contain any logic but should simply retrieve data for the business layer to manipulate. However I'm a little confused with my project because I'm not sure whether I've structured my app correctly for this scenario. Basically I'm trying to retrieve a list of products from the database that fall within a certain pricing threshold. At the moment I have a function in my data layer that basically returns all products where price > min threshold and price < max threshold. But it got me thinking that maybe this is incorrect. Should the data layer simply return a list of ALL products and then the business logic do the filtering? I'm pretty confused over whether the data layer should simply provide methods that allow the business layer to get raw data or whether it should be responsible for getting filtered data too. If anyone has an article or something explaining this in detail it'd be very helpful. Thanks
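
    The usual answer is that filter criteria belong in the data layer's method parameters, while the business layer decides which criteria to ask for; pulling every product back just to discard most of it in memory gives away the database's main strength. A hedged sketch of that split (all names are illustrative):

        import java.math.BigDecimal;
        import java.util.List;

        // Data layer: knows how to fetch, exposes filters as parameters, holds no business rules.
        interface ProductRepository {
            List<Product> findByPriceBetween(BigDecimal minThreshold, BigDecimal maxThreshold);
        }

        // Business layer: decides which thresholds matter and why.
        class PricingService {
            private final ProductRepository repository;

            PricingService(ProductRepository repository) {
                this.repository = repository;
            }

            List<Product> productsInCurrentBand() {
                // The business rule (these particular thresholds) lives here; the repository
                // just executes the parameterized query.
                return repository.findByPriceBetween(new BigDecimal("10.00"), new BigDecimal("50.00"));
            }
        }

        class Product { /* id, name, price ... */ }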

    Read the article
