Search Results

Search found 5176 results on 208 pages for 'max fraser'.


  • How to send an HTTP GET request with headers using cURL

    - by mithunmo
    Hello, I need to send a GET request with the headers below. This is the capture I get from HTTP Live Headers:

        http://172.20.22.26/
        GET / HTTP/1.1
        Host: 172.20.22.26
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.1) Gecko/2008070208 Firefox/3.0.1
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 300
        Connection: keep-alive
        Authorization: Basic bWl0aHVuOm1pdGh1bg==

        HTTP/1.x 200 OK
        Date: Thu, 01 Jan 2009 00:29:20 GMT
        Server: HTTPsrv
        Connection: Keep-Alive
        Keep-Alive: timeout=30, max=100
        Transfer-Encoding: chunked
        Content-Type: text/html

    I am using the following program. It is not working. Please let me know where I am going wrong.

        <?php
        $credentials = "mithun:mithun";
        $url = "http://172.20.22.26";
        $headers = array(
            "GET /HTTP/1.1",
            "User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.1) Gecko/2008070208 Firefox/3.0.1",
            "Content-type: text/xml;charset=\"utf-8\"",
            "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
            "Accept-Language: en-us,en;q=0.5",
            "Accept-Encoding: gzip,deflate",
            "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7",
            "Keep-Alive: 300",
            "Connection: keep-alive",
            "Authorization: Basic " . base64_encode($credentials));
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, $url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_TIMEOUT, 60);
        curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
        //curl_setopt($ch, CURLOPT_USERAGENT, $defined_vars['HTTP_USER_AGENT']);
        $data = curl_exec($ch);
        if (curl_errno($ch)) {
            print "Error: " . curl_error($ch);
        } else {
            // Show me the result
            var_dump($data);
            curl_close($ch);
        }
        ?>
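    For comparison, the same request can be sketched outside PHP. The following Python snippet (standard library only; host and credentials taken from the capture above) sends the same Basic-auth header. Note that an HTTP client builds the request line ("GET / HTTP/1.1") itself, so it is never passed as an ordinary header.

        import base64
        import urllib.request

        url = "http://172.20.22.26/"
        token = base64.b64encode(b"mithun:mithun").decode("ascii")

        req = urllib.request.Request(url, headers={
            "User-Agent": "Mozilla/5.0",
            "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
            "Authorization": "Basic " + token,   # same value as in the capture above
        })

        with urllib.request.urlopen(req, timeout=60) as resp:
            print(resp.status)        # expect 200 if the credentials are accepted
            print(resp.read()[:200])  # first part of the body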

    Read the article

  • SQL deadlock on delete then bulk insert

    - by StarLite
    I have an issue with a deadlock in SQL Server that I haven't been able to resolve. Basically I have a large number of concurrent connections (from many machines) that are executing transactions where they first delete a range of entries and then re-insert entries within the same range with a bulk insert. Essentially, the transaction looks like this:

        BEGIN TRANSACTION T1
        DELETE FROM [TableName] WITH( XLOCK HOLDLOCK ) WHERE [Id]=@Id AND [SubId]=@SubId
        INSERT BULK [TableName] (
            [Id] Int,
            [SubId] Int,
            [Text] VarChar(max) COLLATE SQL_Latin1_General_CP1_CI_AS
        ) WITH(CHECK_CONSTRAINTS, FIRE_TRIGGERS)
        COMMIT TRANSACTION T1

    The bulk insert only inserts items matching the Id and SubId of the deletion in the same transaction. Furthermore, these Id and SubId entries should never overlap. When I have enough concurrent transactions of this form, I start to see a significant number of deadlocks between these statements. I added the locking hints XLOCK HOLDLOCK to attempt to deal with the issue, but they don't seem to be helping. The canonical deadlock graph for this error shows:

        Connection 1:
            Holds RangeX-X on PK_TableName
            Holds IX Page lock on the table
            Requesting X Page lock on the table
        Connection 2:
            Holds IX Page lock on the table
            Requests RangeX-X lock on the table

    What do I need to do in order to ensure that these deadlocks don't occur? I have been doing some reading on the RangeX-X locks and I'm not sure I fully understand what is going on with these. Do I have any options short of locking the entire table here?

    Read the article

  • Simple, fast SQL queries for flat files.

    - by plinehan
    Does anyone know of any tools to provide simple, fast queries of flat files using a SQL-like declarative query language? I'd rather not pay the overhead of loading the file into a DB since the input data is typically thrown out almost immediately after the query is run. Consider the data file, "animals.txt":

        dog 15
        cat 20
        dog 10
        cat 30
        dog 5
        cat 40

    Suppose I want to extract the highest value for each unique animal. I would like to write something like:

        cat animals.txt | foo "select $1, max(convert($2 using decimal)) group by $1"

    I can get nearly the same result using sort:

        cat animals.txt | sort -t " " -k1,1 -k2,2nr

    And I can always drop into awk from there, but this all feels a bit awkward (couldn't resist) when a SQL-like language would seem to solve the problem so cleanly. I've considered writing a wrapper for SQLite that would automatically create a table based on the input data, and I've looked into using Hive in single-processor mode, but I can't help but feel this problem has been solved before. Am I missing something? Is this functionality already implemented by another standard tool? Halp!
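    For what it's worth, the SQLite-wrapper idea mentioned above can be sketched in a few lines of Python: load the rows into an in-memory table and run the GROUP BY there. The column names are made up here for the animals.txt example.

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE animals (name TEXT, value INTEGER)")

        with open("animals.txt") as f:
            rows = (line.split() for line in f if line.strip())
            con.executemany("INSERT INTO animals VALUES (?, ?)", rows)

        # highest value for each unique animal
        for name, top in con.execute("SELECT name, MAX(value) FROM animals GROUP BY name"):
            print(name, top)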

    Read the article

  • Exclude string from wildcard in bash

    - by Peter O'Doherty
    Hi, I'm trying to adapt a bash script from "Sams' Teach Yourself Linux in 24 Hours" which is a safe delete command called rmv. The files are removed by calling rmv -d file1 file2 etc. In the original script a max of 4 files can be removed using the variables $1 $2 $3 $4. I want to extend this to an unlimited number of files by using a wildcard. So I do:

        for i in $*
        do
            mv $i $HOME/.trash
        done

    The files are deleted okay, but the option -d of the command rmv -d is also treated as an argument, and bash objects that it cannot be found. Is there a better way to do this? Thanks, Peter

        #!/bin/bash
        # rmv - a safe delete program
        # uses a trash directory under your home directory
        mkdir $HOME/.trash 2>/dev/null
        # four internal script variables are defined
        cmdlnopts=false
        delete=false
        empty=false
        list=false
        # uses getopts command to look at command line for any options
        while getopts "dehl" cmdlnopts; do
            case "$cmdlnopts" in
                d ) /bin/echo "deleting: \c" $2 $3 $4 $5 ; delete=true ;;
                e ) /bin/echo "emptying the trash..." ; empty=true ;;
                h ) /bin/echo "safe file delete v1.0"
                    /bin/echo "rmv -d[elete] -e[mpty] -h[elp] -l[ist] file1-4" ;;
                l ) /bin/echo "your .trash directory contains:" ; list=true ;;
            esac
        done
        if [ $delete = true ]
        then
            for i in $*
            do
                mv $i $HOME/.trash
            done
            /bin/echo "rmv finished."
        fi
        if [ $empty = true ]
        then
            /bin/echo "empty the trash? \c"
            read answer
            case "$answer" in
                y) rm -i $HOME/.trash/* ;;
                n) /bin/echo "trashcan delete aborted." ;;
            esac
        fi
        if [ $list = true ]
        then
            ls -l $HOME/.trash
        fi

    Read the article

  • One page has more than one URL; search engines give a penalty. Please help me out.

    - by Ali Demirtas
    Hi, I am using PrestaShop as the cart for my website. I have a problem: the website used to use dynamic URLs, and I enabled friendly URL rewriting. The problem is that one page now has more than one URL. You can access the same page from the dynamic URL and the static URL; in fact a single page has 9 different URLs. This obviously creates problems for SEO, as search engines penalize my website for it. What can I do to solve this problem? I have no knowledge of programming. Here is the .htaccess for the website. Any sample code or help is really appreciated.

        # URL rewriting module activation
        RewriteEngine on
        # URL rewriting rules
        RewriteRule ^([a-z0-9]+)-([a-z0-9]+)(-[_a-zA-Z0-9-]*)/([_a-zA-Z0-9-]*).jpg$ /img/p/$1-$2$3.jpg [L,E]
        RewriteRule ^([0-9]+)-([0-9]+)/([_a-zA-Z0-9-]*).jpg$ /img/p/$1-$2.jpg [L,E]
        RewriteRule ^([0-9]+)(-[_a-zA-Z0-9-]*)/([_a-zA-Z0-9-]).jpg$ /img/c/$1$2.jpg [L,E]
        RewriteRule ^lang-([a-z]{2})/([a-zA-Z0-9-])/([0-9]+)-([a-zA-Z0-9-]).html(.)$ /product.php?id_product=$3&isolang=$1$5 [L,E]
        RewriteRule ^lang-([a-z]{2})/([0-9]+)-([a-zA-Z0-9-]).html(.)$ /product.php?id_product=$2&isolang=$1$4 [L,E]
        RewriteRule ^lang-([a-z]{2})/([0-9]+)-([a-zA-Z0-9-])(.)$ /category.php?id_category=$2&isolang=$1 [QSA,L,E]
        RewriteRule ^([a-zA-Z0-9-])/([0-9]+)-([a-zA-Z0-9-]).html(.*)$ /product.php?id_product=$2$4 [L,E]
        RewriteRule ^([0-9]+)-([a-zA-Z0-9-]).html(.)$ /product.php?id_product=$1$3 [L,E]
        RewriteRule ^([0-9]+)-([a-zA-Z0-9-])(.)$ /category.php?id_category=$1 [QSA,L,E]
        RewriteRule ^content/([0-9]+)-([a-zA-Z0-9-])(.)$ /cms.php?id_cms=$1 [QSA,L,E]
        RewriteRule ^([0-9]+)__([a-zA-Z0-9-])(.)$ /supplier.php?id_supplier=$1$3 [QSA,L,E]
        RewriteRule ^([0-9]+)_([a-zA-Z0-9-])(.)$ /manufacturer.php?id_manufacturer=$1$3 [QSA,L,E]
        RewriteRule ^lang-([a-z]{2})/(.*)$ /$2?isolang=$1 [QSA,L,E]

        # Catch 404 errors
        ErrorDocument 404 /404.php

        Options +FollowSymLinks
        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^.com [NC]
        RewriteRule ^(.)$ http://www.*.com/$1 [L,R=301]

        Options +FollowSymLinks
        RewriteEngine on
        # index.php to /
        RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.index.php\ HTTP/
        RewriteRule ^(.)index.php$ /$1 [R=301,L]

        Header set Cache-Control: "no-store, no-cache, must-revalidate, pre-check=0, post-check=0, max-age=0"

    Read the article

  • pooling with Windsor

    - by AlonEl
    I've tried out the pooling lifestyle with Windsor. Let's say I want multiple CustomerTasks to work with a pool of ILoggers. When I try resolving more times than maxPoolSize, new loggers keep getting created. What am I missing, and what exactly is the meaning of min and max pool size? The XML configuration I use is (demo code):

        <component id="customertasks"
                   type="WindsorTest.CustomerTasks, WindsorTestCheck"
                   lifestyle="transient" />
        <component id="logger.console"
                   service="WindsorTest.ILogger, WindsorTestCheck"
                   type="WindsorTest.ConsoleLogger, WindsorTestCheck"
                   lifestyle="pooled"
                   initialPoolSize="2"
                   maxPoolSize="5" />

    Code is:

        public interface ILogger
        {
            void Log(string message);
        }

        public class ConsoleLogger : ILogger
        {
            private static int count = 0;

            public ConsoleLogger()
            {
                Console.WriteLine("Hello from constructor number:" + count);
                count++;
            }

            public void Log(string message)
            {
                Console.WriteLine(message);
            }
        }

        public class CustomerTasks
        {
            private readonly ILogger logger;

            public CustomerTasks(ILogger logger)
            {
                this.logger = logger;
            }

            public void SaveCustomer()
            {
                logger.Log("Saved customer");
            }
        }

    Read the article

  • Draw Rectangle with XNA

    - by mazzzzz
    Hey guys, I was working on a game and wanted to highlight a spot on the screen when something happens. I created a class to do this for me, and found a bit of code to draw the rectangle:

        static private Texture2D CreateRectangle(int width, int height, Color colori)
        {
            // create the rectangle texture, but it will have no color! lets fix that
            Texture2D rectangleTexture = new Texture2D(game.GraphicsDevice, width, height, 1, TextureUsage.None, SurfaceFormat.Color);
            // set the color array to the number of pixels in the texture
            Color[] color = new Color[width * height];
            // loop through all the colors, setting them to whatever values we want
            for (int i = 0; i < color.Length; i++)
            {
                color[i] = colori;
            }
            rectangleTexture.SetData(color); // set the color data on the texture
            return rectangleTexture;         // return the texture
        }

    The problem is that the code above is called every update (60 times a second), and it was not written with optimization in mind. I have no clue how else to write code to do this, though. It needs to be extremely fast (the code above freezes the game, which has only skeleton code right now). Any suggestions? Note: any new code would be great (wireframe/fill are both fine), and I would like to be able to specify the color. Something to point me in the right direction would be great. Thanks, Max
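    The expensive part above is rebuilding the texture on every update. A language-neutral sketch of the usual fix (shown in Python rather than XNA, with create_texture standing in for whatever actually allocates the texture) is to build each rectangle once per (width, height, color) combination and reuse it:

        _rectangle_cache = {}

        def get_rectangle(width, height, color, create_texture):
            key = (width, height, color)
            if key not in _rectangle_cache:
                # the expensive allocation happens only the first time
                _rectangle_cache[key] = create_texture(width, height, color)
            return _rectangle_cache[key]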

    Read the article

  • Extending timeout and message size in WCF service generated by Biztalk 2006 R2

    - by Sergej Andrejev
    Hi, I'm generating a WCF service using BizTalk. The configuration I get is this:

        <system.serviceModel>
          <behaviors>
            <serviceBehaviors>
              <behavior name="ServiceBehaviorConfiguration">
                <serviceDebug httpHelpPageEnabled="true" httpsHelpPageEnabled="false" includeExceptionDetailInFaults="false" />
                <serviceMetadata httpGetEnabled="true" httpsGetEnabled="false" externalMetadataLocation="" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
          <services>
            <!-- Note: the service name must match the configuration name for the service implementation. -->
            <service name="Microsoft.BizTalk.Adapter.Wcf.Runtime.BizTalkServiceInstance" behaviorConfiguration="ServiceBehaviorConfiguration">
              <endpoint name="HttpMexEndpoint" address="mex" binding="mexHttpBinding" bindingConfiguration="" contract="IMetadataExchange" />
              <!--<endpoint name="HttpsMexEndpoint" address="mex" binding="mexHttpsBinding" bindingConfiguration="" contract="IMetadataExchange" />--> 
            </service>
          </services>
        </system.serviceModel>

    Maybe it's not the most beautiful configuration, but it works. The problem is I don't know how to modify the timeouts and maximum message size, because it has only a mex endpoint. I'm surprised this works at all with just a mex endpoint. So the two questions are: Why does this work at all? What should I add to extend the timeouts and message size?

    Read the article

  • NSUserDefaults not saving properly

    - by BWC
    Hey guys, I'm having issues with NSUserDefaults and I don't quite understand what's going on. My app has 5 levels, and each level does the exact same thing with NSUserDefaults (retrieves the level's defaults, changes the values as the user plays the level, and then sets the defaults and synchronizes at the end of the level). The first 4 levels work without a hitch, but the last level doesn't save the values. The app doesn't crash, the last level isn't the very last thing that happens, and I even synchronize the defaults when the application terminates. Is there a max size on NSUserDefaults, or is there anything anyone can think of that I haven't? I'll post the code below, but like I said, the first four levels work perfectly.

        // header
        NSUserDefaults *userData;
        @property(nonatomic,retain) NSUserDefaults *userData;

        // class file
        // Sets the boolean variables for the class to use
        userData = [NSUserDefaults standardUserDefaults];
        boolOne = [userData boolForKey:@"LevelFiveBoolOne"];
        boolTwo = [userData boolForKey:@"LevelFiveBoolTwo"];
        boolThree = [userData boolForKey:@"LevelFiveBoolThree"];
        boolFour = [userData boolForKey:@"LevelFiveBoolFour"];
        boolFive = [userData boolForKey:@"LevelFiveBoolFive"];
        boolSix = [userData boolForKey:@"LevelFiveBoolSix"];
        boolSeven = [userData boolForKey:@"LevelFiveBoolSeven"];

        // End of level
        [userData setBool:boolOne forKey:@"LevelFiveBoolOne"];
        [userData setBool:boolTwo forKey:@"LevelFiveBoolTwo"];
        [userData setBool:boolThree forKey:@"LevelFiveBoolThree"];
        [userData setBool:boolFour forKey:@"LevelFiveBoolFour"];
        [userData setBool:boolFive forKey:@"LevelFiveBoolFive"];
        [userData setBool:boolSix forKey:@"LevelFiveBoolSix"];
        [userData setBool:boolSeven forKey:@"LevelFiveBoolSeven"];
        [userData synchronize];

    When I switch to the view that uses these defaults, the values are correct; but when I terminate the application and restart it, the values aren't saved. Every other level does the exact same process; this is the only level that doesn't work. I've stared at this for quite a while and I'm hoping someone out there has run into the same problem and can give me some insight on how they resolved it. Thank you in advance, BWC

    Read the article

  • Are Large iPhone Ping Times Indicative of Application Latency?

    - by yar
    I am contemplating creating a realtime app where an iPod Touch/iPhone/iPad talks to a server-side component (which produces MIDI, and sends it onward within the host). When I ping my iPod Touch on Wifi I get huge latency (and an enormous variance, too):

        64 bytes from 192.168.1.3: icmp_seq=9 ttl=64 time=38.616 ms
        64 bytes from 192.168.1.3: icmp_seq=10 ttl=64 time=61.795 ms
        64 bytes from 192.168.1.3: icmp_seq=11 ttl=64 time=85.162 ms
        64 bytes from 192.168.1.3: icmp_seq=12 ttl=64 time=109.956 ms
        64 bytes from 192.168.1.3: icmp_seq=13 ttl=64 time=31.452 ms
        64 bytes from 192.168.1.3: icmp_seq=14 ttl=64 time=55.187 ms
        64 bytes from 192.168.1.3: icmp_seq=15 ttl=64 time=78.531 ms
        64 bytes from 192.168.1.3: icmp_seq=16 ttl=64 time=102.342 ms
        64 bytes from 192.168.1.3: icmp_seq=17 ttl=64 time=25.249 ms

    Even if this is double what the iPhone-to-host or host-to-iPhone time would be, 15ms+ is too long for the app I'm considering. Is there any faster way around this (e.g., a USB cable)? If not, would building the app on Android offer any other options? Traceroute reports more workable times:

        traceroute to 192.168.1.3 (192.168.1.3), 64 hops max, 52 byte packets
         1  192.168.1.3 (192.168.1.3)  4.662 ms  3.182 ms  3.034 ms

    Can anyone decipher this difference between ping and traceroute for me, and what it might mean for an application that needs to talk to (and from) a host?

    Read the article

  • text-area-text-to-be-split-with-conditions repeated

    - by desmiserables
    I have a text area wherein I have limited the user from entering more than 15 characters in one line, as I want to get the free-flow text separated into substrings of max 15 characters and assign each line an order number. This is what I was doing in my Java class:

        int interval = 15;
        items = new ArrayList();
        TextItem item = null;
        for (int i = 0; i < text.length(); i = i + interval) {
            item = new TextItem();
            item.setOrder(i);
            if (i + interval < text.length()) {
                item.setSubText(text.substring(i, i + interval));
                items.add(item);
            } else {
                item.setSubText(text.substring(i));
                items.add(item);
            }
        }

    Now it works properly unless the user presses the enter key. Whenever the user presses the enter key I want to make that line a new item, having only that part as the subText. I can check whether my text.substring(i, i + interval) contains any "\n" and split till there, but the problem is getting the remaining characters after "\n" until the next 15 characters or the next "\n", and setting the proper order and subText.
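    A sketch of the splitting rule being described, in plain Python rather than the Java above: cut each line (as delimited by "\n") into pieces of at most 15 characters and number the pieces in order. The names are illustrative only.

        def split_text(text, interval=15):
            items = []
            for line in text.split("\n"):
                # an empty line still becomes its own (empty) item
                for i in range(0, max(len(line), 1), interval):
                    items.append((len(items), line[i:i + interval]))
            return items

        for order, sub in split_text("first line that is quite long\nsecond"):
            print(order, repr(sub))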

    Read the article

  • Shopify JSONP issue in ajaxAPI

    - by Aaron U
    I'm getting an odd response back from the Shopify Ajax API for JSONP. If you cURL a Shopify Ajax API location, http://storename.domain.com/cart.json?callback=handler, you will get a JSONP response. But something is breaking the same request in browsers. It appears to be related to compression? Here are the responses from each browser when attempting to call the JSONP as documented.

    Firefox: The page you are trying to view cannot be shown because it uses an invalid or unsupported form of compression.
    Internet Explorer: Internet Explorer cannot display the webpage
    Chrome/Safari/WebKit: Cannot decode raw data, or failed (Chrome)

    Attempted use via jQuery:

        $.getJSON('http://storename.domain.com/cart.json?callback=?', function(data) { ... });
        // Results in a failed request, viewable in the network request panels of dev tools

    Here is some output from cURL including response headers:

        $ curl -i http://storename.domain.com/cart.json?callback=CALLBACK_FUNC
        HTTP/1.1 200 OK
        Server: nginx
        Date: Tue, 18 Dec 2012 13:48:29 GMT
        Content-Type: application/javascript; charset=utf-8
        Transfer-Encoding: chunked
        Connection: keep-alive
        Status: 200 OK
        ETag: cachable:864076445587123764313132415008994143575
        Cache-Control: max-age=0, private, must-revalidate
        X-Alternate-Cache-Key: cachable:11795444887523410552615529412743919200
        X-Cache: hit, server
        X-Request-Id: a0c33a55230fe42bce79b462f6fe450d
        X-UA-Compatible: IE=Edge,chrome=1
        Set-Cookie: _session_id=b6ace1d7b0dbedd37f7787d10e173131; path=/; HttpOnly
        X-Runtime: 0.033811
        P3P: CP="NOI DSP COR NID ADMa OPTa OUR NOR"

        CALLBACK_FUNC({"token":null,"note":null,"attributes":{},"total_price":0,...})

    Also related and unanswered here: Shopify Ajax API JSONP supported? Thanks

    Read the article

  • Out of Core Implementation of a Quadtree

    - by Nima
    Hi, I am trying to build a quadtree data structure (or let's just say a tree) in secondary memory (the hard disk). I have a C++ program to do so, and I use fopen to create the files. Also, I am using tesseral coding to store each cell in a file named with its corresponding code, all in one directory on the disk. The problem is that after creating about 1,100 files, fopen just returns NULL and stops creating new files. I can create further files manually in that directory, but using C++ it cannot create any further files. I know about the max limit of inodes on an ext3 filesystem, which is (from Wikipedia) 32,000, but mine is way less than that; also note that I can create files manually on the disk, just not through fopen. I would also really appreciate any ideas regarding the best way to store a very dynamic quadtree on disk (I need the nodes to be in separate files, and the quadtree might have a depth of 50). Using nested directories is one idea, but I think it will slow down performance because of following the links on the filesystem to access the file. Thanks, Nima
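    Two hedged observations rather than an answer: if the ~1,100 earlier files are never fclose()d, the per-process limit on simultaneously open files is one plausible reason fopen suddenly returns NULL (checking errno would confirm); and the nested-directory idea mentioned at the end could be as simple as deriving a path from the tesseral code, sketched here in Python (the .node suffix and two-digit chunking are made up for illustration):

        import os

        def cell_path(root, tesseral_code, chunk=2):
            # "0312203" -> root/03/12/20/3.node
            parts = [tesseral_code[i:i + chunk] for i in range(0, len(tesseral_code), chunk)]
            return os.path.join(root, *parts) + ".node"

        print(cell_path("quadtree", "0312203"))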

    Read the article

  • The "first past the post election" query problem

    - by MPelletier
    This problem may seem like school work, but it isn't. At best it is self-imposed school work. I encourage any teachers to take it as an example if they wish. "First past the post" elections are single-round, meaning that whoever gets the most votes wins; there are no second rounds. Suppose a table for an election:

        CREATE TABLE ElectionResults (
            DistrictHnd INTEGER NOT NULL,
            PartyHnd INTEGER NOT NULL,
            CandidateName VARCHAR2(100) NOT NULL,
            TotalVotes INTEGER NOT NULL,
            PRIMARY KEY (DistrictHnd, PartyHnd));

    The table has two foreign keys: DistrictHnd points to a District table (lists all the different electoral districts) and PartyHnd points to a Party table (lists all the different political parties). I won't bother with the other tables here; joining them is trivial. This is just a wee bit of context. The question: What SQL query will return a table listing the DistrictHnd, PartyHnd, CandidateName and TotalVotes of the winners (max votes) in each district? This does not suppose any particular database system. If you wish to stick to a particular implementation of SQL, go the way of SQLite and MySQL. If you can devise a better schema (or an easier one), that is acceptable too. Criteria: simplicity, portability to other databases.
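    One portable way to express the winners query is to join the table to the per-district maximum vote count; note that ties return more than one row per district. A small SQLite/Python harness for experimenting (the data-loading part is omitted):

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("""CREATE TABLE ElectionResults (
            DistrictHnd INTEGER NOT NULL,
            PartyHnd INTEGER NOT NULL,
            CandidateName VARCHAR(100) NOT NULL,
            TotalVotes INTEGER NOT NULL,
            PRIMARY KEY (DistrictHnd, PartyHnd))""")

        winners = """
            SELECT r.DistrictHnd, r.PartyHnd, r.CandidateName, r.TotalVotes
            FROM ElectionResults r
            JOIN (SELECT DistrictHnd, MAX(TotalVotes) AS TopVotes
                  FROM ElectionResults
                  GROUP BY DistrictHnd) m
              ON m.DistrictHnd = r.DistrictHnd AND r.TotalVotes = m.TopVotes
        """
        for row in con.execute(winners):
            print(row)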

    Read the article

  • Multiple connections in a single SSH SOCKS 5 Proxy

    - by Elie Zedeck
    Hey guys, my first question here on Stack Overflow: what do I need to do so that the SSH SOCKS 5 proxy (SSH2) will allow multiple connections? What I have noticed is that when I load a page in Firefox (already configured to use the SOCKS 5 proxy), it loads everything one by one. It can be perceived with the bare eye, and I also confirm it through Firebug's NET tab, which logs the connections that have been made. I have already configured some of the directives in the about:config page, like pipelining, persistent proxy connections, and a few other things:

        network.http.pipelining;true
        network.http.pipelining.maxrequests;8
        network.http.pipelining.ssl;true
        network.http.proxy.pipelining;true
        network.http.max-persistent-connections-per-proxy;100
        network.proxy.socks_remote_dns;true

    But I still get this kind of sequential load of resources, which is noticeably very slow. My ISP sucks because during the day it intentionally breaks connections on a random basis. And so it is impossible to actually accomplish meaningful work without a lot of browser refreshes or hitting the F5 key. That is why I started looking for solutions to this. SSH's dynamic port forwarding is the best solution I have found to date, because it has some pretty good compression, which saves a lot of useless traffic, and is also secure. The only thing remaining is to get it to have multiple connections running in it. Thanks for all the inputs.

    Read the article

  • Why are my attempts to open a file using open for writing failing? Ada 95

    - by mat_geek
    When I attempt to open a file to write to, I get an Ada.IO_Exceptions.Name_Error. The procedure call is Ada.Text_IO.Open. The file name is "C:\CC_TEST_LOG.TXT". This file does not exist. This is on Windows XP on an NTFS partition. The user has permissions to create and write to the directory. The filename is well under the WIN32 max path length.

        name_2 : String := "C:\CC_TEST_LOG.TXT"

        if name_2'last > name_2'first then
           begin
              Ada.Text_IO.Open(file, Ada.Text_IO.Out_File, name_2);
              Ada.Text_IO.Put_Line(
                 "CC_Test_Utils: LogFile: ERROR: Open, File " & name_2);
              return;
           exception
              when The_Error : others =>
                 Ada.Text_IO.Put_Line(
                    "CC_Test_Utils: LogFile: ERROR: Open Failed; " &
                    Ada.Exceptions.Exception_Name(The_Error) & ", File " & name_2);
           end;
        end if;

    Read the article

  • Java split xml file

    - by CC
    Hi all, I'm working on a piece of code to split files. I want to split a flat file (that's OK, it is working fine) and an XML file. The idea is to split based on a number of output files: I have a file, and I want to split it into x files (x is a parameter). I'm doing the split by taking the size of the file and dividing the size by the number of files to split into. Then my solution was to use a BufferedReader and to use it like:

        while ((n = reader.read(buffer, 0, buffer.length)) != -1) {

    The main problem is that for the XML file I cannot just split it anywhere; I have to split it on the boundary of a block delimited by a start XML tag and end XML tag:

        <start tag>
        bla bla xml stuff
        </end tag>

    So I cannot cut a block in the middle. So if, when I'm in the middle of a block, the size of my new file is greater than my max, I will have to read until the end of the tag and then start the next file. The problem is that I have all sorts of cases, and it is a bit difficult to search for the end tag: the block reads text until the middle of the end tag; the block reads text until the end of the end tag, with no more characters after; etc. And at the same time I have to loop and read the next block. Sometimes the end of one block concatenated with the start of the next one gives me the end XML tag. I hope you get the idea. My question is, does anyone have an algorithm that does this more accurately and treats all the special cases? The idea is to split the file as quickly as possible. Thanks a lot.
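    The boundary rule being described can be sketched separately from the reading loop: accumulate whole records (everything up to and including the end tag) and only close the current output chunk after a complete record has been written, so a block is never cut in the middle. A minimal Python illustration, assuming record extraction is handled elsewhere:

        def split_records(records, max_bytes):
            chunks, current, size = [], [], 0
            for record in records:        # each record is a full "<tag>...</tag>" string
                current.append(record)
                size += len(record)
                if size >= max_bytes:     # only split once the record is complete
                    chunks.append("".join(current))
                    current, size = [], 0
            if current:
                chunks.append("".join(current))
            return chunks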

    Read the article

  • Database design for a photo-sharing site?

    - by javaLearner.java
    I am using PHP and MySQL. If I want to build a photo-sharing website, what's the best database design for uploading and displaying photos? This is what I have in mind:

        domain
        |_> photos
        |_> user

    A logged-in user will upload a photo at [http://www.domain.com/user/upload.php]. The photos are stored in the filesystem, and the path to each photo is stored in the database. So my photos folder would look like: photos/userA/subfolders+photos, photos/userB/subfolders+photos, photos/userC/subfolders+photos, etc. The public/other people may view his photo at [http://www.domain.com/photos/user/?photoid=123], where 123 is the photoid; from there, I will query the database to fetch the path and display the image. My questions: What's the best database design for a photo-sharing website (like Flickr)? Will there be any problems if I keep creating new folders in the "photos" folder? What if hundreds of thousands of users register? What are the best practices? What size of photos should I keep? Currently I only store a thumbnail (100x100) and (max) 1600x1200 px photos. What other things should I take note of when developing a photo-sharing website?
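    A minimal sketch of the kind of table usually sitting behind the layout described above (store the path, not the image, in the database). The column names are illustrative only, not a recommendation of the one true design, and SQLite is used here purely for a runnable example:

        import sqlite3

        con = sqlite3.connect("photos.db")
        con.execute("""CREATE TABLE IF NOT EXISTS photos (
            photo_id   INTEGER PRIMARY KEY,
            user_id    INTEGER NOT NULL,
            path       TEXT NOT NULL,        -- e.g. photos/userA/2010/04/img_123.jpg
            thumb_path TEXT NOT NULL,
            created_at TEXT NOT NULL)""")

        # display page: look up the stored path for ?photoid=123
        row = con.execute("SELECT path FROM photos WHERE photo_id = ?", (123,)).fetchone()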

    Read the article

  • Figuring out the Nyquist performance limitation of an ADC on an example PIC microcontroller

    - by AKE
    I'm spec-ing the suitability of a dsPIC microcontroller for an analog-to-digital application. This would be preferable to using dedicated A/D chips and a separate dedicated DSP chip. To do that, I've had to run through some computations, pulling the relevant parameters from the datasheets. I'm not sure I've got it right -- would appreciate a check! (EDITED NOTE: The PIC10F220 in the example below was selected ONLY to walk through a simple example to check that I'm interpreting Tacq, Fosc, TAD, and divisor correctly in working through this sort of Nyquist analysis. The actual chips I'm considering for the design are the dsPIC33FJ128MC804 (with 16b A/D) or dsPIC30F3014 (with 12b A/D).)

    A simple example: the PIC10F220 is the simplest possible PIC with an ADC. It runs at a clock speed of 8MHz and has an instruction cycle of 0.5us (4 clock steps per instruction). So:

        Taking Tacq = 6.06 us (acquisition time for ADC, assume chip temp. = 50*C) [datasheet p34]
        Taking Fosc = 8MHz (clock speed)
        Taking divisor = 4 (4 clock steps per CPU instruction)
        This gives TAD = 0.5us (TAD = 1/(Fosc/divisor))
        Conversion time is 13*TAD [datasheet p31]
        This gives a conversion time of 6.5us
        ADC duration is then 12.56 us (Tacq + 13*TAD)
        Assuming at least 2 instructions for load/store: this is another 1 us (0.5 us per instruction)
        Which would give a max sampling rate of 73.7 ksps (1/13.56)
        Supposing 8 more instructions for real-time processing: this is another 4 us
        Thus, total ADC/handling time = 17.56 us (12.56 us + 1 us + 4 us)
        So the expected upper sampling rate is 56.9 ksps
        The Nyquist frequency for this sampling rate is therefore 28 kHz

    If this is right, it suggests the (theoretical) performance suitability of this chip's A/D is for signals that are bandlimited to 28 kHz. Is this a correct interpretation of the information given in the data sheet for obtaining the Nyquist performance limit? Any opinions on the noise susceptibility of ADCs in PIC / dsPIC chips would be much appreciated! AKE
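    The arithmetic above, collected in one place so the steps can be checked (all figures are the ones quoted from the PIC10F220 datasheet in the question):

        t_acq  = 6.06e-6              # acquisition time at 50 C
        t_ad   = 1 / (8e6 / 4)        # TAD = 1/(Fosc/divisor) = 0.5 us
        t_conv = 13 * t_ad            # conversion time, 6.5 us
        t_adc  = t_acq + t_conv       # ADC duration, 12.56 us
        t_io   = 2 * 0.5e-6           # two load/store instructions
        t_proc = 8 * 0.5e-6           # eight real-time processing instructions

        print(1 / (t_adc + t_io))               # ~73.7 ksps, no processing
        print(1 / (t_adc + t_io + t_proc))      # ~56.9 ksps with processing
        print(1 / (t_adc + t_io + t_proc) / 2)  # ~28 kHz Nyquist limit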

    Read the article

  • How do I cast <T> to varbinary and still be able to perform a CONVERT on the SQL side?

    - by Biff MaGriff
    Hello, I'm writing an application that will allow a user to define custom quizzes and then allow another user to respond to the questions. Each question of the quiz has a corresponding datatype. All the responses to all the questions are stored vertically in my [Response] table. I currently use 2 fields to store the response:

        // Response schema
        ResponseID int
        QuizPersonID int
        QuestionID int
        ChoiceID int                 // maps to Choice table, used for drop down lists
        ChoiceValue varbinary(MAX)   // used to store a user entered value

    I'm using .NET 3.5, C#, and SQL Server 2008. I'm thinking that I would want to store the different datatypes in the same field, and then in my SQL report proc I would CONVERT to the proper datatype. I'm thinking this is ideal because I only have to check one field. I'm also thinking it might be more trouble than it is worth. I think my other options are to store the data as strings in the db (yuck), or to have a column for each datatype I might use. So what I would like to know is: how would I format my datatypes in C# so that they can be converted properly in SQL? What is the performance hit for converting in SQL? Should I just make a whole wack of columns, one for each datatype?

    Read the article

  • Can pdflatex (or any TeX package) automatically rescale included images which have been reduced in size?

    - by drfrogsplat
    I'm writing my thesis in LaTeX, generating it with pdflatex. I have a large number of figures, many of which are bitmaps (as opposed to SVG) in PNG/JPEG format. I've generally created them to be fairly high resolution (say 1600x1200-ish) to ensure that whatever size they end up in the document, they'll be at least 300dpi when printed. As I'm writing/laying out the document, I'm including graphics (using \includegraphics from the graphicx package) and setting widths/heights as appropriate (e.g. subfigures are quite small). I don't need the images to be any more than about 300 dpi at best, so where I have shrunk a 1600x1200 image down to say 5cm, the image is now at 800 dpi. So despite including some very small (on the page) images, the PDF is becoming quite large. Is there a way to tell pdflatex or graphicx (or something else involved?) to convert all images to a maximum of 300 dpi, based on the dimensions I'm setting with say \includegraphics[width=2in]{filename}? i.e. so it scales the image to a max of 600x600 pixels as it includes it in the PDF (leaving the original file untouched). I know I can resize the original images with various command line applications, and include the pre-resized versions, but given the images vary in size considerably, it wouldn't be as simple as making sure they're all 300dpi for a constant printed size. It'd also be nice to be able to easily create different versions of PDFs (web vs final print) without resizing images manually, so that the 'web' PDF capped images at say 72-100 dpi while the final print one could cap at 600 (if at all).

    Read the article

  • LINQ - How to query a range of effective dates that only has start dates

    - by itchi
    I'm using C# 3.5 and Entity Framework. I have a list of items in the database that contain interest rates. Unfortunately this list only contains the effective start date. I need to query this list for all items within a range. However, I can't see a way to do this without querying the database twice. (Although I'm wondering if delayed execution with Entity Framework is making only one call.) Regardless, I'm wondering if I can do this without using my context twice.

        internal IQueryable<Interest> GetInterests(DateTime startDate, DateTime endDate)
        {
            var FirstDate = Context.All().Where(x => x.START_DATE < startDate).Max(x => x.START_DATE);
            IQueryable<Interest> listOfItems = Context.All().Where(x => x.START_DATE >= FirstDate
                                                                     && x.START_DATE <= endDate);
            return listOfItems;
        }

    Read the article

  • Beginner SQL question: querying gold and silver tag badges in Stack Exchange Data Explorer

    - by polygenelubricants
    I'm using the Stack Exchange Data Explorer to learn SQL, but I think the fundamentals of the question are applicable to other databases. I'm trying to query the Badges table, which according to Stexdex (that's what I'm going to call it from now on) has the following schema:

        Badges
            Id
            UserId
            Name
            Date

    This works well for badges like [Epic] and [Legendary], which have unique names, but the silver and gold tag-specific badges seem to be mixed in together by having the same exact name. Here's an example query I wrote for the [mysql] tag:

        SELECT UserId as [User Link], Date
        FROM Badges
        WHERE Name = 'mysql'
        ORDER BY Date ASC

    The (slightly annotated) output, as seen on Stexdex, is:

        User Link        Date
        ---------------  -------------------
        // all for silver except where noted
        Bill Karwin      2009-02-20 11:00:25
        Quassnoi         2009-06-01 10:00:16
        Greg             2009-10-22 10:00:25
        Quassnoi         2009-10-31 10:00:24   // for gold
        Bill Karwin      2009-11-23 11:00:30   // for gold
        cletus           2010-01-01 11:00:23
        OMG Ponies       2010-01-03 11:00:48
        Pascal MARTIN    2010-02-17 11:00:29
        Mark Byers       2010-04-07 10:00:35
        Daniel Vassallo  2010-05-14 10:00:38

    This is consistent with the current list of silver and gold earners at the moment of this writing, but to speak in more timeless terms, as of the end of May 2010 only 2 users have earned the gold [mysql] tag: Quassnoi and Bill Karwin, as evidenced in the above result by their names being the only ones that appear twice. So this is the way I understand it: the first time an Id appears (in chronological order) is for the silver badge; the second time is for the gold. Now, the above result mixes the silver and gold entries together. My questions are: Is this a typical design, or is there a much friendlier schema/normalization/whatever you call it? In the current design, how would you query the silver and gold badges separately? GROUP BY Id and pick the min/max or first/second by the Date somehow? How can you write a query that lists all the silver badges first and then all the gold badges? Imagine also that the "real" query may be more complicated, i.e. not just listing by date. How would you write it so that it doesn't have too much repetition between the silver and gold subqueries? Is it perhaps more typical to do two totally separate queries instead? What is this idiom called? A row "partitioning" query to put them into "buckets" or something?
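    A sketch of the "first occurrence is silver, second is gold" reading using a window function, which is one form of the row-partitioning idiom asked about at the end. SQLite 3.25+ syntax is shown here for easy experimentation from Python; ROW_NUMBER is available on SQL Server as well:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE Badges (Id INTEGER, UserId INTEGER, Name TEXT, Date TEXT)")

        query = """
            WITH ranked AS (
                SELECT UserId, Date,
                       ROW_NUMBER() OVER (PARTITION BY UserId ORDER BY Date) AS n
                FROM Badges
                WHERE Name = 'mysql'
            )
            SELECT UserId, Date, CASE n WHEN 1 THEN 'silver' ELSE 'gold' END AS grade
            FROM ranked
            ORDER BY grade DESC, Date   -- 'silver' sorts after 'gold', so DESC lists silver first
        """
        for row in con.execute(query):
            print(row)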

    Read the article

  • Forward slash problem in XSL and XSQL

    - by Peter Kaleta
    Hi, I have a simple XSQL page:

        <?xml version="1.0" encoding="utf-8"?>
        <?xml-stylesheet type="text/xsl" href="zad1.xsl" ?>
        <page xmlns:xsql="urn:oracle-xsql" connection="java:comp/env/jdbc/mondialDS">
          <xsql:query max-rows="-1" null-indicator="no" tag-case="lower" rowset-element="continents">
            select name as continent from mondial_user.Continent order by 1
          </xsql:query>
        </page>

    which gives me a list of continents with "australia/oceania" among them. I use this XSL on the above XSQL:

        <?xml version="1.0" encoding="UTF-8" ?>
        <xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <!-- Root template -->
          <res>
            <xsl:template match="/continents">
              <xsl:for-each select="row">
                <re>
                  <xsl:value-of select="continent"/>
                </re>
              </xsl:for-each>
            </xsl:template>
          </res>
        </xsl:stylesheet>

    And Firefox throws an error about a wrongly formatted XML document, pointing at:

        AfricaAmericaAsiaAustralia/OceaniaEurope
        -----------------------------------^

    Help appreciated.

    Read the article

  • When sending headers to download a PDF, Safari appends .html

    - by alex
    Here are the request and response headers for http://www.example.com/get/pdf:

        GET /~get/pdf HTTP/1.1
        Host: www.example.com
        User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Referer: http://www.example.com
        Cookie: etc

        HTTP/1.1 200 OK
        Date: Thu, 29 Apr 2010 02:20:43 GMT
        Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8i DAV/2 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635
        X-Powered-By: Me
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Pragma: no-cache
        Cache-Control: private
        Content-Disposition: attachment; filename="File #1.pdf"
        Content-Length: 18776
        Keep-Alive: timeout=5, max=100
        Connection: Keep-Alive
        Content-Type: text/html; charset=utf-8

    Basically, the response headers are sent by DOMPDF's stream() method. In Firefox, the file is prompted as File #1.pdf. However, in Safari, the file is saved as File #1.pdf.html. Does anyone know why Safari is appending the html extension to the filename?
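    One detail worth noticing in the capture above: the response advertises Content-Type: text/html even though the body is a PDF, and a download response would normally declare application/pdf. As a point of comparison only (not the asker's PHP/DOMPDF code), a minimal WSGI handler sending the intended headers might look like this:

        def app(environ, start_response):
            pdf_bytes = b"%PDF-1.4 ..."   # placeholder for the generated document
            start_response("200 OK", [
                ("Content-Type", "application/pdf"),
                ("Content-Disposition", 'attachment; filename="File #1.pdf"'),
                ("Content-Length", str(len(pdf_bytes))),
            ])
            return [pdf_bytes]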

    Read the article
