Search Results

Search found 11358 results on 455 pages for 'utf 16'.


  • SQL - Count grouped entries and then get the max values grouped by date

    - by Marcus
    Hello, I can't work out how to write the right SQL statement. I have an SQLite table that holds one row for every played track, with the played date/time. Now I want to count the plays per artist, grouped by day, and then find the artist with the maximum play count per day. I used this query

      SELECT COUNT(ARTISTID) AS artistcount, ARTIST AS artistname,
             strftime('%Y-%m-%d', playtime) AS day_played
      FROM playcount
      GROUP BY artistname

    to get this result:

      "93"|"The Skygreen Leopards" |"2010-06-16"
      "2" |"Arcade Fire"           |"2010-06-15"
      "2" |"Dead Kennedys"         |"2010-06-15"
      "2" |"Wolf People"           |"2010-06-15"
      "3" |"16 Horsepower"         |"2010-06-15"
      "3" |"Alela Diane"           |"2010-06-15"
      "46"|"Motorama"              |"2010-06-15"
      "1" |"Ariel Pink's Haunted"  |"2010-06-14"

    I then tried to query this derived table, but I always get the wrong value in artistname:

      SELECT MAX(artistcount), artistname, day_played
      FROM (
        SELECT COUNT(ARTISTID) AS artistcount, ARTIST AS artistname,
               strftime('%Y-%m-%d', playtime) AS day_played
        FROM playcount
        GROUP BY artistname
      )
      GROUP BY strftime('%Y-%m-%d', day_played)

    which gives

      "93"|"lilium"      |"2010-06-16"
      "46"|"Wolf People" |"2010-06-15"
      "30"|"of Montreal" |"2010-06-14"

    but the artist name is wrong. I think that by grouping on the day, SQLite just picks an arbitrary artist from each group. I tried things like INNER JOIN and GROUP BY ... HAVING by trial and error, and I read examples of similar issues, but I always get lost in the column names (I am a bit burned out). I hope someone can give me a hint. Thanks, m
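
    One approach that may work in SQLite (a sketch, assuming the playcount table and column names from the question; it has not been run against the real data): group by both artist and day first, then join that aggregate back against the per-day maximum.

      SELECT per_day.artistcount, per_day.artistname, per_day.day_played
      FROM (
        -- plays per artist per day (note the GROUP BY includes the day)
        SELECT COUNT(ARTISTID) AS artistcount,
               ARTIST AS artistname,
               strftime('%Y-%m-%d', playtime) AS day_played
        FROM playcount
        GROUP BY artistname, day_played
      ) AS per_day
      JOIN (
        -- highest play count for each day
        SELECT MAX(artistcount) AS maxcount, day_played
        FROM (
          SELECT COUNT(ARTISTID) AS artistcount,
                 ARTIST AS artistname,
                 strftime('%Y-%m-%d', playtime) AS day_played
          FROM playcount
          GROUP BY artistname, day_played
        )
        GROUP BY day_played
      ) AS best
        ON per_day.day_played = best.day_played
       AND per_day.artistcount = best.maxcount;

    The inner query in the question groups only by artistname, so every day's plays for an artist collapse into one row; grouping by artist and day keeps one count per artist per day, and the join then keeps the row(s) that hit the daily maximum (ties return more than one row per day).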

    Read the article

  • Problem sorting RSS feed by date using XSL

    - by Buckers
    I'm creating a website where I need to show the top 5 records from an RSS feed, and these need to be sorted by date and time. The date fields in the RSS feed are in the following format: "Mon, 16 Feb 2009 16:02:44 GMT". I'm having big problems getting the records to sort correctly - I've tried lots of different code examples I've seen, but none seem to sort the records correctly. The code for my XSL sheet is shown below, and the feed in question is here. Very grateful for anyone's help! Thanks, Chris.

    XSL CODE:

      <xsl:stylesheet version="1.1" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                      xmlns:digg="http://digg.com//docs/diggrss/"
                      xmlns:dc="http://purl.org/dc/elements/1.1/">
        <xsl:template match="/">
          <xsl:for-each select="//*[local-name()='item'][position() &lt; 6]">
            <p>
              <a>
                <xsl:attribute name="href"><xsl:value-of select="*[local-name()='link']"/></xsl:attribute>
                <xsl:attribute name="target"><xsl:text>top</xsl:text></xsl:attribute>
                <xsl:value-of select="*[local-name()='title']"/>
              </a>
              <br/>
              <span class="smaller"><xsl:value-of select="*[local-name()='pubDate']" disable-output-escaping="yes"/></span>
            </p>
          </xsl:for-each>
        </xsl:template>
      </xsl:stylesheet>
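
    A sketch of one way to sort RFC-822 pubDate values in XSLT 1.0, assuming the element layout above and two-digit, zero-padded days as in the example ("Mon, 16 Feb 2009 16:02:44 GMT"; the substring offsets come from that fixed-width form, not from the feed itself): build sort keys for year, month, day and time, and move the position() test inside the loop so the newest five are kept after sorting.

      <xsl:for-each select="//*[local-name()='item']">
        <!-- year, e.g. "2009" -->
        <xsl:sort select="substring(*[local-name()='pubDate'], 13, 4)"
                  data-type="number" order="descending"/>
        <!-- month: position of the 3-letter abbreviation in a lookup string -->
        <xsl:sort select="string-length(substring-before(
                            'JanFebMarAprMayJunJulAugSepOctNovDec',
                            substring(*[local-name()='pubDate'], 9, 3)))"
                  data-type="number" order="descending"/>
        <!-- day of month, e.g. "16" -->
        <xsl:sort select="substring(*[local-name()='pubDate'], 6, 2)"
                  data-type="number" order="descending"/>
        <!-- time of day, e.g. "16:02:44" sorts correctly as text -->
        <xsl:sort select="substring(*[local-name()='pubDate'], 18, 8)"
                  order="descending"/>
        <xsl:if test="position() &lt; 6">
          <!-- existing <p>...</p> output goes here -->
        </xsl:if>
      </xsl:for-each>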

    Read the article

  • 3x3 Sobel operator and gradient features

    - by pithyless
    Reading a paper, I'm having difficulty understanding the algorithm described. Given a black and white digital image of a handwriting sample, cut out a single character to analyze. Since this can be any size, the algorithm needs to take this into account (if it makes things easier, we can assume the size is 2^n x 2^m). The description states that, given this image, we will convert it to a 512-bit feature (a 512-bit hash) as follows:

      (192 bits) Compute the gradient of the image by convolving it with a 3x3 Sobel operator. The direction of the gradient at every edge is quantized to 12 directions.
      (192 bits) The structural feature generator takes the gradient map and looks in a neighborhood for certain combinations of gradient values (used to compute 8 distinct features that represent lines and corners in the image).
      (128 bits) The concavity generator uses an 8-point star operator to find coarse concavities in 4 directions, holes, and large-scale strokes.

    The image feature maps are normalized with a 4x4 grid. For now I'm struggling with how to take an arbitrary image, split it into 16 sections, and use a 3x3 Sobel operator to come up with 12 bits for each section. (But if you have some insight into the other parts, feel free to comment.)
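
    A sketch of the 12-direction quantization step in C (illustrative only: the kernels and binning follow the standard Sobel definition, not anything specific to the paper, and the edge threshold is made up): convolve each pixel with the horizontal and vertical Sobel kernels, then map the gradient angle onto one of 12 sectors of 30 degrees each.

      #include <math.h>

      /* Standard 3x3 Sobel kernels. */
      static const int SX[3][3] = { {-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1} };
      static const int SY[3][3] = { {-1,-2,-1}, { 0, 0, 0}, { 1, 2, 1} };

      /* Returns 0..11 (30-degree sectors), or -1 if the gradient is too weak
         to count as an edge.  img is row-major, width w, 8-bit grayscale;
         (x, y) must not lie on the image border. */
      int gradient_direction(const unsigned char *img, int w, int x, int y)
      {
          int gx = 0, gy = 0;
          for (int j = -1; j <= 1; j++)
              for (int i = -1; i <= 1; i++) {
                  int p = img[(y + j) * w + (x + i)];
                  gx += SX[j + 1][i + 1] * p;
                  gy += SY[j + 1][i + 1] * p;
              }
          if (gx * gx + gy * gy < 100)                    /* not an edge pixel */
              return -1;
          double angle = atan2((double)gy, (double)gx);   /* range -pi..pi */
          int bin = (int)floor((angle + M_PI) / (2.0 * M_PI) * 12.0);
          return bin % 12;                                /* guard against angle == +pi */
      }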

    Read the article

  • JAVA image transfer problem

    - by user579098
    Hi, I have a school assignment, to send a jpg image,split it into groups of 100 bytes, corrupt it, use a CRC check to locate the errors and re-transmit until it eventually is built back into its original form. It's practically ready, however when I check out the new images, they appear with errors.. I would really appreciate if someone could look at my code below and maybe locate this logical mistake as I can't understand what the problem is because everything looks ok :S For the file with all the data needed including photos and error patterns one could download it from this link:http://rapidshare.com/#!download|932tl2|443122762|Data.zip|739 Thanks in advance, Stefan p.s dont forget to change the paths in the code for the image and error files package networks; import java.io.*; // for file reader import java.util.zip.CRC32; // CRC32 IEEE (Ethernet) public class Main { /** * Reads a whole file into an array of bytes. * @param file The file in question. * @return Array of bytes containing file data. * @throws IOException Message contains why it failed. */ public static byte[] readFileArray(File file) throws IOException { InputStream is = new FileInputStream(file); byte[] data=new byte[(int)file.length()]; is.read(data); is.close(); return data; } /** * Writes (or overwrites if exists) a file with data from an array of bytes. * @param file The file in question. * @param data Array of bytes containing the new file data. * @throws IOException Message contains why it failed. */ public static void writeFileArray(File file, byte[] data) throws IOException { OutputStream os = new FileOutputStream(file,false); os.write(data); os.close(); } /** * Converts a long value to an array of bytes. * @param data The target variable. * @return Byte array conversion of data. * @see http://www.daniweb.com/code/snippet216874.html */ public static byte[] toByta(long data) { return new byte[] { (byte)((data >> 56) & 0xff), (byte)((data >> 48) & 0xff), (byte)((data >> 40) & 0xff), (byte)((data >> 32) & 0xff), (byte)((data >> 24) & 0xff), (byte)((data >> 16) & 0xff), (byte)((data >> 8) & 0xff), (byte)((data >> 0) & 0xff), }; } /** * Converts a an array of bytes to long value. * @param data The target variable. * @return Long value conversion of data. * @see http://www.daniweb.com/code/snippet216874.html */ public static long toLong(byte[] data) { if (data == null || data.length != 8) return 0x0; return (long)( // (Below) convert to longs before shift because digits // are lost with ints beyond the 32-bit limit (long)(0xff & data[0]) << 56 | (long)(0xff & data[1]) << 48 | (long)(0xff & data[2]) << 40 | (long)(0xff & data[3]) << 32 | (long)(0xff & data[4]) << 24 | (long)(0xff & data[5]) << 16 | (long)(0xff & data[6]) << 8 | (long)(0xff & data[7]) << 0 ); } public static byte[] nextNoise(){ byte[] result=new byte[100]; // copy a frame's worth of data (or remaining data if it is less than frame length) int read=Math.min(err_data.length-err_pstn, 100); System.arraycopy(err_data, err_pstn, result, 0, read); // if read data is less than frame length, reset position and add remaining data if(read<100){ err_pstn=100-read; System.arraycopy(err_data, 0, result, read, err_pstn); }else // otherwise, increase position err_pstn+=100; // return noise segment return result; } /** * Given some original data, it is purposefully corrupted according to a * second data array (which is read from a file). In pseudocode: * corrupt = original xor corruptor * @param data The original data. * @return The new (corrupted) data. 
*/ public static byte[] corruptData(byte[] data){ // get the next noise sequence byte[] noise = nextNoise(); // finally, xor data with noise and return result for(int i=0; i<100; i++)data[i]^=noise[i]; return data; } /** * Given an array of data, a packet is created. In pseudocode: * frame = corrupt(data) + crc(data) * @param data The original frame data. * @return The resulting frame data. */ public static byte[] buildFrame(byte[] data){ // pack = [data]+crc32([data]) byte[] hash = new byte[8]; // calculate crc32 of data and copy it to byte array CRC32 crc = new CRC32(); crc.update(data); hash=toByta(crc.getValue()); // create a byte array holding the final packet byte[] pack = new byte[data.length+hash.length]; // create the corrupted data byte[] crpt = new byte[data.length]; crpt = corruptData(data); // copy corrupted data into pack System.arraycopy(crpt, 0, pack, 0, crpt.length); // copy hash into pack System.arraycopy(hash, 0, pack, data.length, hash.length); // return pack return pack; } /** * Verifies frame contents. * @param frame The frame data (data+crc32). * @return True if frame is valid, false otherwise. */ public static boolean verifyFrame(byte[] frame){ // allocate hash and data variables byte[] hash=new byte[8]; byte[] data=new byte[frame.length-hash.length]; // read frame into hash and data variables System.arraycopy(frame, frame.length-hash.length, hash, 0, hash.length); System.arraycopy(frame, 0, data, 0, frame.length-hash.length); // get crc32 of data CRC32 crc = new CRC32(); crc.update(data); // compare crc32 of data with crc32 of frame return crc.getValue()==toLong(hash); } /** * Transfers a file through a channel in frames and reconstructs it into a new file. * @param jpg_file File name of target file to transfer. * @param err_file The channel noise file used to simulate corruption. * @param out_file The name of the newly-created file. 
* @throws IOException */ public static void transferFile(String jpg_file, String err_file, String out_file) throws IOException { // read file data into global variables jpg_data = readFileArray(new File(jpg_file)); err_data = readFileArray(new File(err_file)); err_pstn = 0; // variable that will hold the final (transfered) data byte[] out_data = new byte[jpg_data.length]; // holds the current frame data byte[] frame_orig = new byte[100]; byte[] frame_sent = new byte[100]; // send file in chunks (frames) of 100 bytes for(int i=0; i<Math.ceil(jpg_data.length/100); i++){ // copy jpg data into frame and init first-time switch System.arraycopy(jpg_data, i*100, frame_orig, 0, 100); boolean not_first=false; System.out.print("Packet #"+i+": "); // repeat getting same frame until frame crc matches with frame content do { if(not_first)System.out.print("F"); frame_sent=buildFrame(frame_orig); not_first=true; }while(!verifyFrame(frame_sent)); // usually, you'd constrain this by time to prevent infinite loops (in // case the channel is so wacked up it doesn't get a single packet right) // copy frame to image file System.out.println("S"); System.arraycopy(frame_sent, 0, out_data, i*100, 100); } System.out.println("\nDone."); writeFileArray(new File(out_file),out_data); } // global variables for file data and pointer public static byte[] jpg_data; public static byte[] err_data; public static int err_pstn=0; public static void main(String[] args) throws IOException { // list of jpg files String[] jpg_file={ "C:\\Users\\Stefan\\Desktop\\Data\\Images\\photo1.jpg", "C:\\Users\\Stefan\\Desktop\\Data\\Images\\photo2.jpg", "C:\\Users\\Stefan\\Desktop\\Data\\Images\\photo3.jpg", "C:\\Users\\Stefan\\Desktop\\Data\\Images\\photo4.jpg" }; // list of error patterns String[] err_file={ "C:\\Users\\Stefan\\Desktop\\Data\\Error Pattern\\Error Pattern 1.DAT", "C:\\Users\\Stefan\\Desktop\\Data\\Error Pattern\\Error Pattern 2.DAT", "C:\\Users\\Stefan\\Desktop\\Data\\Error Pattern\\Error Pattern 3.DAT", "C:\\Users\\Stefan\\Desktop\\Data\\Error Pattern\\Error Pattern 4.DAT" }; // loop through all jpg/channel combinations and run tests for(int x=0; x<jpg_file.length; x++){ for(int y=0; y<err_file.length; y++){ System.out.println("Transfering photo"+(x+1)+".jpg using Pattern "+(y+1)+"..."); transferFile(jpg_file[x],err_file[y],jpg_file[x].replace("photo","CH#"+y+"_photo")); } } } }
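
    Two things in the code above look like plausible causes of the damaged output (a reading of the posted source, not a tested fix): corruptData() XORs the caller's array in place, so frame_orig itself gets damaged and then re-used with a fresh CRC on every retry, and Math.ceil(jpg_data.length/100) performs integer division before the ceil, so a final partial chunk is silently dropped. A sketch of buildFrame() that corrupts a copy instead of the original:

      public static byte[] buildFrame(byte[] data) {
          // CRC over the clean payload
          CRC32 crc = new CRC32();
          crc.update(data);
          byte[] hash = toByta(crc.getValue());

          // corrupt a copy so the caller's frame_orig stays intact for retries
          byte[] crpt = java.util.Arrays.copyOf(data, data.length);
          crpt = corruptData(crpt);

          byte[] pack = new byte[data.length + hash.length];
          System.arraycopy(crpt, 0, pack, 0, crpt.length);
          System.arraycopy(hash, 0, pack, data.length, hash.length);
          return pack;
      }

    The chunk loop would also need something like (int) Math.ceil(jpg_data.length / 100.0), plus a shorter arraycopy for the last frame, so the tail of the file is not lost.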

    Read the article

  • Public key of Android project and keystore created in Eclipse?

    - by user578056
    I created an Android project using Eclipse (under Windows FWIW) and let Eclipse create the keypair during the Export Android Application process. I successfully used Eclipse to make a signed release build that is now on the Market. Now I want to use ProGuard, which I believe means using Ant instead of Eclipse to build. It was a pain, but Ant building works in both debug and release, until it tries to sign the APK. I get:

      [signjar] jarsigner: Certificate chain not found for: redacted.
      redacted must reference a valid KeyStore key entry containing a private key
      and corresponding public key certificate chain.

    keytool -list -keystore redacted gives me:

      Keystore type: JKS
      Keystore provider: SUN
      Your keystore contains 1 entry
      redacted, Jan 16, 2011, PrivateKeyEntry,
      Certificate fingerprint (MD5): BD:0F:70:C1:39:F5:FE:5B:BC:CD:89:0B:C8:66:95:E0

    Which brings me to the actual question: where is my public key? I have some sort of public key on my Android Market profile, but is that the pair for my private key? If so, how do I store that in the keystore so that jarsigner will work?
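
    In a JKS keystore the public key is not a separate entry at all: it lives inside the self-signed certificate that is part of the PrivateKeyEntry listed above, so there is nothing extra to import for jarsigner. A sketch of how to pull it out and inspect it with the stock JDK tools (alias, keystore and file names here are placeholders):

      keytool -export -alias redacted -keystore my-release.keystore -file release-cert.cer
      keytool -printcert -file release-cert.cer

    The "Certificate chain not found" message is more often the signjar task pointing at the wrong keystore, alias or password than a missing public key, so it may be worth double-checking that the values in the Ant build match exactly what keytool -list reports.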

    Read the article

  • Can't get any speedup from parallelizing Quicksort using Pthreads

    - by Murat Ayfer
    I'm using Pthreads to create a new thread for each partition after the list is split into the right and left halves (less than and greater than the pivot). I do this recursively until I reach the maximum number of allowed threads. When I use printfs to follow what goes on in the program, I clearly see that each thread is doing its delegated work in parallel. However, using a single process is always the fastest. As soon as I try to use more threads, the time it takes to finish almost doubles, and keeps increasing with the number of threads. I am allowed to use up to 16 processors on the server I am running it on. The algorithm goes like this:

      1. Split the array into right and left halves by comparing the elements to the pivot.
      2. Start a new thread for the right and the left, and wait until the threads join back.
      3. If there are more available threads, they can create more recursively.
      4. Each thread waits for its children to join.

    Everything makes sense to me, and sorting works perfectly well, but more threads make it slow down immensely. I tried setting a minimum number of elements per partition for a thread to be started (e.g. 50000). I tried an approach where, when a thread is done, it allows another thread to be started, which leads to hundreds of threads starting and finishing throughout. I think the overhead was way too much. So I got rid of that, and if a thread was done executing, no new thread was created. I got a little more speedup but it is still a lot slower than a single process. The code I used is below. http://pastebin.com/UaGsjcq2 Does anybody have any clue as to what I could be doing wrong?

    Read the article

  • Oracle JDBC intermittent Connection Issue

    - by Lipska
    I am experiencing a very strange problem. This is a very simple use of JDBC connecting to an Oracle database.

      OS: Ubuntu
      Java Version: 1.5.0_16-b02, 1.6.0_17-b04
      Database: Oracle 11g Release 11.1.0.6.0

    When I use the jar file ojdbc14.jar it connects to the database every time. When I use the jar file ojdbc5.jar it connects some of the time and at other times it throws an error (shown below). If I recompile with Java 6 and use ojdbc6.jar I get the same results as with ojdbc5.jar. I need specific features in ojdbc5.jar that are not available in ojdbc14.jar. Any ideas?

    Error:

      Connecting to oracle
      java.sql.SQLException: Io exception: Connection reset
          at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:74)
          at oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:110)
          at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:171)
          at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:227)
          at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:494)
          at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:411)
          at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:490)
          at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:202)
          at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:33)
          at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:474)
          at java.sql.DriverManager.getConnection(DriverManager.java:525)
          at java.sql.DriverManager.getConnection(DriverManager.java:171)
          at TestConnect.main(TestConnect.java:13)

    Code: below is the code I am using.

      import java.io.*;
      import java.sql.*;

      public class TestConnect {
          public static void main(String[] args) {
              try {
                  System.out.println("Connecting to oracle");
                  Connection con = null;
                  Class.forName("oracle.jdbc.driver.OracleDriver");
                  con = DriverManager.getConnection(
                      "jdbc:oracle:thin:@172.16.48.100:1535:sample", "JOHN", "90009000");
                  System.out.println("Connected to oracle");
                  con.close();
                  System.out.println("Goodbye");
              } catch (Exception e) { e.printStackTrace(); }
          }
      }
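
    One frequently reported cause of intermittent "Io exception: Connection reset" with the 11g thin driver (ojdbc5/ojdbc6) on Linux is the JVM blocking on /dev/random while it gathers entropy for the login handshake; whether that applies here is a guess, but it is cheap to test by pointing the JVM at the non-blocking device:

      java -Djava.security.egd=file:/dev/./urandom TestConnect

    (The extra /./ in the path is deliberate; some JDK versions ignore a plain /dev/urandom value.)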

    Read the article

  • Project Euler Question 14 (Collatz Problem)

    - by paradox
    The following iterative sequence is defined for the set of positive integers:

      n -> n/2      (n is even)
      n -> 3n + 1   (n is odd)

    Using the rule above and starting with 13, we generate the following sequence:

      13 -> 40 -> 20 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1

    It can be seen that this sequence (starting at 13 and finishing at 1) contains 10 terms. Although it has not been proved yet (Collatz Problem), it is thought that all starting numbers finish at 1. Which starting number, under one million, produces the longest chain? NOTE: Once the chain starts, the terms are allowed to go above one million. I tried coding a brute-force solution to this in C. However, it seems that my program stalls when trying to calculate 113383. Please advise :)

      #include <stdio.h>
      #define LIMIT 1000000

      int iteration(int value) {
          if (value % 2 == 0)
              return (value / 2);
          else
              return (3 * value + 1);
      }

      int count_iterations(int value) {
          int count = 1;
          //printf("%d\n", value);
          while (value != 1) {
              value = iteration(value);
              //printf("%d\n", value);
              count++;
          }
          return count;
      }

      int main() {
          int iteration_count = 0, max = 0;
          int i, count;
          for (i = 1; i < LIMIT; i++) {
              printf("Current iteration : %d\n", i);
              iteration_count = count_iterations(i);
              if (iteration_count > max) {
                  max = iteration_count;
                  count = i;
              }
          }
          //iteration_count = count_iterations(113383);
          printf("Count = %d\ni = %d\n", max, count);
      }
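
    The stall at 113383 is almost certainly signed 32-bit overflow: the chain for 113383 climbs past INT_MAX, the value wraps around to a negative number, and the while loop never reaches 1. A sketch of the fix, keeping the structure above but carrying the chain value in a 64-bit type (a sketch only, not verified against the Project Euler answer):

      long long iteration(long long value) {
          return (value % 2 == 0) ? value / 2 : 3 * value + 1;
      }

      int count_iterations(int start) {
          long long value = start;   /* intermediate terms may exceed INT_MAX */
          int count = 1;
          while (value != 1) {
              value = iteration(value);
              count++;
          }
          return count;
      }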

    Read the article

  • toggling proximity sensor on iPhone loses an event

    - by slugolicious
    I'm using setProximitySensingEnabled and implemented proximityStateChanged in my UIApplication subclass. It looks like if sensing is toggled, that the first "off" event is being lost. My UIApplication class is pretty basic... -(void)proximityStateChanged:(BOOL)state { NSLog(state ? @"ON" : @"OFF"); } In my application delegate, I have a UISwitch that enables/disables the proximity sensor. -(IBAction)toggleProxy:(id)sender { [UIApplication sharedApplication].proximitySensingEnabled = prox.on; } "prox" is my UISwitch. The test works fine when it first starts. I tap the switch to turn it on and then put my hand over the sensor for a second then move it away and get: 2009-03-11 12:43:00.465 Proximity[324:20b] ON 2009-03-11 12:43:02.514 Proximity[324:20b] OFF 2009-03-11 12:43:04.046 Proximity[324:20b] ON 2009-03-11 12:43:05.621 Proximity[324:20b] OFF I then tap the switch to turn it off then tap again to turn it on. Now I get: 2009-03-11 12:43:12.005 Proximity[324:20b] ON 2009-03-11 12:43:14.789 Proximity[324:20b] ON 2009-03-11 12:43:16.467 Proximity[324:20b] OFF 2009-03-11 12:43:17.516 Proximity[324:20b] ON 2009-03-11 12:43:19.077 Proximity[324:20b] OFF Notice I get two ON's before an OFF. The OFF is lost somewhere. I can't replicate this behavior using Google's mobile app so I'm wondering if they're resetting something in between proximity enabling. They don't have the proximity sensor on all the time because if you cover the sensor, the screen doesn't go blank. You have to tilt the phone up and angle it back (to simulate the position it would be in at your ear) and then covering the sensor works. Anyone else playing with the sensor? In my particular app, I'm recording a voice message and when you move the phone away from your ear, I want to pause the recording (when I get an OFF). The first time I move the phone away from my ear, the recording is not paused. However, if I put it to my ear and move it away again, it is paused.

    Read the article

  • How to set up RPX widget and facebook app to be able to authenticate with rpx_now?

    - by Andrei
    Using the sample app for rpx_now gem ( http://github.com/grosser/rpx_now_example) on localhost:3000, I have successfully logged in via Google Accounts, myOpenID, Yahoo, but cannot make it via Facebook. In the RPX app/widget settings I have set my facebook-app key and secret. In my facebook app settings, the Connect URL is myappname.rpxnow.com. But when I try to connect, then I don't even see a facebook login page, just a number of redirects and I am back to my localhost with the following exception http://gist.github.com/386520 . Before I was successfully connecting with oauth2 gem, however, without fetching user data - only authentication. That time I set only key/secret and localhost as my Connect URL. Currently, I don't even ask for email etc., but still the same problem. Can it happen because rpx_now cannot get requested user data from facebook? Or it is a problem of facebook key/secret? May be I need to provide more settings of my facebook app? RPXNow::ApiError in UsersController#create Got error: Invalid parameter: token (code: 1), HTTP status: 200 RAILS_ROOT: /home/Andrei/rpx_now_example Application Trace | Framework Trace | Full Trace /usr/lib/ruby/gems/1.8/gems/rpx_now-0.6.20/lib/rpx_now/api.rb:71:in `parse_response' /usr/lib/ruby/gems/1.8/gems/rpx_now-0.6.20/lib/rpx_now/api.rb:21:in `call' /usr/lib/ruby/gems/1.8/gems/rpx_now-0.6.20/lib/rpx_now.rb:23:in `user_data' /home/Andrei/rpx_now_example/app/controllers/users_controller.rb:16:in `create' Request Parameters: None Show session dump Response Headers: {"Content-Type"="", "Cache-Control"="no-cache"}

    Read the article

  • Thinking sphinx: Problems with polymorphic associations

    - by auralbee
    Hello, I recently switched from ultrasphinx to thinking_sphinx for full-text search. I am trying to figure out how to index fields of polymorphic associations. I found some information but I still have some problems, although it seems to be easy. Here is my setup: class Info < ActiveRecord::Base belongs_to :mappable, :polymorphic => true define_index indexes mappable_type indexes mappable(:name), :as => :mappable_name end class A < ActiveRecord::Base has_many :infos, :as => :mappable end class B < ActiveRecord::Base has_many :infos, :as => :mappable end Amongst others, I want to do a search in the name column of A and B (both classes have this column), so I added the field to my index. When I do rake thinking_sphinx:index I get the following error: Generating Configuration to .../config/development.sphinx.conf rake aborted! undefined method `connection' for Object:Class .../.gem/ruby/1.8/gems/thinking-sphinx-1.3.16/lib/thinking_sphinx/ association.rb:149:in `casted_options' Any idea? Am I missing something? Thanks in advance. Tobi

    Read the article

  • Shift count negative or too big error - correct solution?

    - by PeterK
    I have the following function for reading a big-endian quadword (in an abstract base file I/O class):

      unsigned long long CGenFile::readBEq() {
          unsigned long long qT = 0;
          qT |= readb() << 56;
          qT |= readb() << 48;
          qT |= readb() << 40;
          qT |= readb() << 32;
          qT |= readb() << 24;
          qT |= readb() << 16;
          qT |= readb() <<  8;
          qT |= readb() <<  0;
          return qT;
      }

    The readb() function reads a BYTE. Here are the typedefs used:

      typedef unsigned char  BYTE;
      typedef unsigned short WORD;
      typedef unsigned long  DWORD;

    The thing is that I get 4 compiler warnings on the first four lines with the shift operation:

      warning C4293: '<<' : shift count negative or too big, undefined behavior

    I understand why this warning occurs, but I can't seem to figure out how to get rid of it correctly. I could do something like:

      qT |= (unsigned long long)readb() << 56;

    This removes the warning, but isn't there any other problem? Will the BYTE be correctly extended all the time? Maybe I'm just thinking about it too much and the solution is that simple. Can you guys help me out here? Thanks.
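
    The cast is indeed the right idea: readb() returns a BYTE, which is only promoted to int (32 bits) before the shift, so << 56 shifts past the promoted width. Widening each byte to unsigned long long before shifting is well defined, and zero-extension of an unsigned char is guaranteed. A sketch of the whole function with the casts applied (names follow the snippet above):

      unsigned long long CGenFile::readBEq() {
          unsigned long long qT = 0;
          qT |= (unsigned long long)readb() << 56;
          qT |= (unsigned long long)readb() << 48;
          qT |= (unsigned long long)readb() << 40;
          qT |= (unsigned long long)readb() << 32;
          qT |= (unsigned long long)readb() << 24;
          qT |= (unsigned long long)readb() << 16;
          qT |= (unsigned long long)readb() <<  8;
          qT |= (unsigned long long)readb();
          return qT;
      }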

    Read the article

  • Zend framework "invalid controller" error message

    - by stef
    When I load the homepage of a ZF site I get a message saying "Invalid controller specified (error)" where "error" seems to be the name of the controller. In my bootstrap.php I have the snippet below where I added the prints statement in the catch(): // Dispatch the request using the front controller. try { $frontController->dispatch(); } catch (Exception $exception) { print $exception; exit; exit($exception->getMessage()); } This prints out: exception 'Zend_Controller_Dispatcher_Exception' with message 'Invalid controller specified (error)' in /www/common/ZendFramework/library16/Zend/Controller/Dispatcher/Standard.php:241 Stack trace: #0 /www/common/ZendFramework/library16/Zend/Controller/Front.php(934): Zend_Controller_Dispatcher_Standard->dispatch(Object(Zend_Controller_Request_Http), Object(Zend_Controller_Response_Http)) #1 /www/site.com/htdocs/application/bootstrap.php(38): Zend_Controller_Front->dispatch() #2 /www/site.com/htdocs/public/index.php(16): require('/www/site....') #3 {main} Can anyone make sense out of what it going on? I have a hunch it has something to do with case sensitivity and the naming conventions of ZF. The initial "invalid controller" message comes from the snippet below print $request->getControllerName(); if (!$this->isDispatchable($request)) { $controller = $request->getControllerName(); if (!$this->getParam('useDefaultControllerAlways') && !empty($controller)) { require_once 'Zend/Controller/Dispatcher/Exception.php'; throw new Zend_Controller_Dispatcher_Exception('Invalid controller specified (' . $request->getControllerName() . ')'); } $className = $this->getDefaultControllerClass($request); } This all prints out "indexerrorInvalid controller specified (error)" so it looks like it's trying to load the index controller first, can not and then has a problem loading the error controller. Could it just be that the path to the controller files is wrong?

    Read the article

  • Select a distinct record, filtering is not working..

    - by help_inmssql
    Hello everyone, I am new to SQL. I have a table with records like this:

      c1  c2     c3                   c4  c5  c6
      1   John   2.3.2010 12:09:54    4   7   99
      2   mike   2.3.2010 13:09:59    8   6   88
      3   ahmad  2.3.2010 13:09:59    1   9   19
      4   Jim    23.3.2010 16:35:14   4   5   99
      5   run    23.3.2010 12:09:54   3   8   12

    I want to fetch only one record per day, the latest one; if two rows happen at the same time, sort by c1 (so between the rows with the same timestamp it should fetch row 3). The expected result is:

      3   ahmad  2.3.2010 13:09:59    1   9   19
      4   Jim    23.3.2010 16:35:14   4   5   99

    My new problem is this: if I filter the records based on conditions, the last record is missing. I tried many ways but it still fails. Here update_log is my table:

      SELECT *
      FROM update_log t1
      WHERE t1.c3 = ( SELECT MAX(t2.c3)
                      FROM update_log t2
                      WHERE DATEDIFF(dd, t2.c3, t1.c3) = 0 )
        AND t1.c3 > '02.03.2010' AND t1.modified_at <= '22.03.2010'
      ORDER BY t1.c3 ASC

    With this I am not able to retrieve the record "4 Jim 23.3.2010 16:35:14 4 5 99"; the query returns only "3 ahmad 2.3.2010 13:09:59 1 9 19". The format of the column c3 is datetime; I am pumping the data into the column using $date = date("d.m.Y H:i", time()); (a simple fetch of today's date). Another query that I tried for the same purpose, with the same problem (the last record never comes back):

      SELECT *
      FROM (SELECT CONVERT(varchar(10), c3, 104) AS date,
                   MAX(c3) AS max_date,
                   MAX(c1) AS Nr
            FROM update_log
            GROUP BY CONVERT(varchar(10), c3, 104)) AS t2
      INNER JOIN update_log AS t1
        ON (t2.max_date = t1.c3
            AND CONVERT(varchar(10), c3, 104) = date
            AND t1.[c1] = Nr)
      WHERE t1.c3 >= '02.03.2010' AND t1.c3 <= '16.04.2010'
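
    Two things worth checking first: if modified_at tracks c3, the row for 23.3.2010 falls outside the filter t1.modified_at <= '22.03.2010' and can never be returned by that query, and literals written as 'dd.mm.yyyy' are interpreted according to the session's language/DATEFORMAT setting, which makes them easy to get wrong. A sketch of the latest-row-per-day pattern with unambiguous yyyymmdd literals (column names as in the question; the exact date range is illustrative and untested):

      SELECT t1.*
      FROM update_log t1
      WHERE t1.c1 = ( SELECT TOP 1 t2.c1
                      FROM update_log t2
                      WHERE DATEDIFF(dd, t2.c3, t1.c3) = 0   -- same calendar day
                      ORDER BY t2.c3 DESC, t2.c1 DESC )      -- latest time, ties broken by c1
        AND t1.c3 >= '20100302' AND t1.c3 < '20100324'
      ORDER BY t1.c3 ASC;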

    Read the article

  • How many files in a directory is too many?

    - by Kip
    Does it matter how many files I keep in a single directory? If so, how many files in a directory is too many, and what are the impacts of having too many files? (This is on a Linux server.) Background: I have a photo album website, and every image uploaded is renamed to an 8-hex-digit id (say, a58f375c.jpg). This is to avoid filename conflicts (if lots of "IMG0001.JPG" files are uploaded, for example). The original filename and any useful metadata are stored in a database. Right now, I have somewhere around 1500 files in the images directory. This makes listing the files in the directory (through an FTP or SSH client) take a few seconds. But I can't see that it has any effect other than that. In particular, there doesn't seem to be any impact on how quickly an image file is served to the user. I've thought about reducing the number of images per directory by making 16 subdirectories: 0-9 and a-f. Then I'd move the images into the subdirectories based on what the first hex digit of the filename was. But I'm not sure that there's any reason to do so except for the occasional listing of the directory through FTP/SSH.

    Read the article

  • SQL Server insert with XML parameter - empty string not converting to null for numeric

    - by Mayo
    I have a stored procedure that takes an XML parameter and inserts the "Entity" nodes as records into a table. This works fine unless one of the numeric fields has a value of empty string in the XML. Then it throws an "error converting data type nvarchar to numeric" error. Is there a way for me to tell SQL to convert empty string to null for those numeric fields in the code below?

      -- @importData XML <- stored procedure param
      DECLARE @l_index INT
      EXECUTE sp_xml_preparedocument @l_index OUTPUT, @importData

      INSERT INTO dbo.myTable ( [field1], [field2], [field3] )
      SELECT [field1], [field2], [field3]
      FROM OPENXML(@l_index, 'Entities/Entity', 1)
      WITH ( field1 int           'field1'
           , field2 varchar(40)   'field2'
           , field3 decimal(15,2) 'field3' )

      EXECUTE sp_xml_removedocument @l_index

    EDIT: And if it helps, sample XML. The error is thrown unless I comment out field3 in the code above or provide a value for field3 below.

      <?xml version="1.0" encoding="utf-16"?>
      <Entities>
        <Entity>
          <field1>2435</field1>
          <field2>843257-3242</field2>
          <field3 />
        </Entity>
      </Entities>
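
    One approach that may work (a sketch, not tested against the schema above): let OPENXML hand field3 back as a string and do the conversion yourself, turning the empty string into NULL with NULLIF before casting to decimal.

      SELECT [field1]
           , [field2]
           , CONVERT(decimal(15,2), NULLIF([field3], '')) AS field3
      FROM OPENXML(@l_index, 'Entities/Entity', 1)
      WITH ( field1 int         'field1'
           , field2 varchar(40) 'field2'
           , field3 varchar(20) 'field3' )   -- read as text, cast after NULLIF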

    Read the article

  • Oracle rownum in db2 - Java data archiving

    - by HonorGod
    I have a data archiving process in java that moves data between db2 and sybase. FYI - This is not done through any import/export process because there are several conditions on each table that are available on run-time and so this process is developed in java. Right now I have single DatabaseReader and DatabaseWriter defined for each source and destination combination so that data is moved in multiple threads. I guess I wanted to expand this further where I can have Multiple DatabaseReaders and Multiple DatabaseWriters defined for each source and destination combination. So, for example if the source data is about 100 rows and I defined 10 readers and 10 writer, each reader will read 10 rows and give them to the writer. I hope process will give me extreme performance depending on the resources available on the server [CPU, Memory etc]. But I guess the problem is these source tables do not have primary keys and it is extremely difficult to grab rows in multiple sets. Oracle provides rownum concept and i guess the life is much simpler there....but how about db2? How can I achieve this behavior with db2? Is there a way to say fetch first 10 records and then fetch next 10 records and so on? Any suggestions / ideas ? Db2 Version - DB2 v8.1.0.144 Fix Pack Num - 16 Linux
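
    DB2 does have an equivalent for this kind of rownum-style slicing: the ROW_NUMBER() OLAP function (present in DB2 UDB 8.x), which numbers the rows of an arbitrary ordering so each reader can fetch its own range. A sketch, with the table and column names purely illustrative since the real tables and conditions are only known at run time; note that if the ORDER BY expression is not deterministic across statements, separate readers can miss or duplicate rows, so it is worth ordering by a combination of columns that is unique in practice.

      SELECT *
      FROM ( SELECT t.*,
                    ROW_NUMBER() OVER (ORDER BY t.col_a, t.col_b) AS rn
             FROM source_table t
             -- plus whatever run-time archiving conditions apply
           ) AS numbered
      WHERE rn BETWEEN 1 AND 10   -- the next reader takes 11 to 20, and so on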

    Read the article

  • preg_match , pattern, php

    - by Michael
    I'm trying to extract some specific pictures from html content . Currently I have the following array [1] = Array ( [0] = BuEZm5g!Wk~$(KGrHqQH-DgEvrb2CLuOBL-vbkHlKw~~_14.JPG#!BuEZm5g!Wk~$(KGrHqQH-DgEvrb2CLuOBL-vbkHlKw~~_12.JPG|1#http://i.ebayimg.com/08#!BuEZqMQBGk~$(KGrHqQH-DoEvrKYSoPiBL-vb)WgLw~~_14.JPG#!BuEZqMQBGk~$(KGrHqQH-DoEvrKYSoPiBL-vb)WgLw~~_12.JPG|2#http://i.ebayimg.com/13#!BuEZtkw!2k~$(KGrHqYH-EYEvsh4EjJSBL-vb-bLow~~_14.JPG#!BuEZtkw!2k~$(KGrHqYH-EYEvsh4EjJSBL-vb-bLow~~_12.JPG|3#http://i.ebayimg.com/03#!BuEZ)!wEGk~$(KGrHqYH-DwEvq8Z1JuQBL-vcMOoFQ~~_14.JPG#!BuEZ)!wEGk~$(KGrHqYH-DwEvq8Z1JuQBL-vcMOoFQ~~_12.JPG|4#http://i.ebayimg.com/09#!BuEZ1IQEGk~$(KGrHqEH-D0EvqwClvviBL-vcclJwg~~_14.JPG#!BuEZ1IQEGk~$(KGrHqEH-D0EvqwClvviBL-vcclJwg~~_12.JPG|5#http://i.ebayimg.com/17#!BuEZ4FQCWk~$(KGrHqUH-DMEvS+,FRj5BL-vco)Qgg~~_14.JPG#!BuEZ4FQCWk~$(KGrHqUH-DMEvS+,FRj5BL-vco)Qgg~~_12.JPG|6#http://i.ebayimg.com/17#!BuEZ7FQEWk~$(KGrHqYH-EYEvsh4EjJSBL-vc1v2Hg~~_14.JPG#!BuEZ7FQEWk~$(KGrHqYH-EYEvsh4EjJSBL-vc1v2Hg~~_12.JPG|7#http://i.ebayimg.com/12#!BuEZ-c!Bmk~$(KGrHqQH-CYEvr5z9)NVBL-vdC,)Mg~~_14.JPG#!BuEZ-c!Bmk~$(KGrHqQH-CYEvr5z9)NVBL-vdC,)Mg~~_12.JPG|8#http://i.ebayimg.com/20#!BuE,CNgCWk~$(KGrHqIH-CIEvqKBurmhBL-vdRBe3!~~_14.JPG#!BuE,CNgCWk~$(KGrHqIH-CIEvqKBurmhBL-vdRBe3!~~_12.JPG|9#http://i.ebayimg.com/24#!BuE,FN!EWk~$(KGrHqUH-C0EvsBdjbv0BL-vdeFkD!~~_14.JPG#!BuE,FN!EWk~$(KGrHqUH-C0EvsBdjbv0BL-vdeFkD!~~_12.JPG|10#http://i.ebayimg.com/16#!BuE,IOgBmk~$(KGrHqEH-EMEvpym7mSLBL-vdqI6nw~~_14.JPG#!BuE,IOgBmk~$(KGrHqEH-EMEvpym7mSLBL-vdqI6nw~~ ) I need to extract only the pictures that start with ! and ends in 12.JPG . An example of expected result is !BuEZm5g!Wk~$(KGrHqQH-DgEvrb2CLuOBL-vbkHlKw~~_12.JPG . I have a pattern but it doesn't work for some reasons that I don't know . It is $pattern = "/#\!(.*)_12.JPG/"; Thank you in advance for any help
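
    A sketch of a pattern that should pick out only the ..._12.JPG names (assuming every wanted name is preceded by a '#' and contains neither '#' nor '|', as in the sample above): anchor on the '#', require the segment to start with '!' and end in _12.JPG, and use preg_match_all to collect every occurrence.

      <?php
      // $content holds the HTML/text shown above
      preg_match_all('/#(![^#|]+_12\.JPG)/', $content, $matches);
      $images = $matches[1];   // e.g. "!BuEZm5g!Wk~$(KGrHqQH-DgEvrb2CLuOBL-vbkHlKw~~_12.JPG"
      print_r($images);

    The original /#\!(.*)_12.JPG/ fails mainly because the greedy .* runs across segment boundaries (and preg_match stops after the first match), so the capture swallows everything up to the last _12.JPG in the string.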

    Read the article

  • Django 1.4.1 error loading MySQLdb module when attempting 'python manage.py shell'

    - by Paul
    I am trying to set up MySQL, and can't seem to be able to enter the Django manage.py shell interpreter. Getting the output below: rrdhcp-10-32-44-126:django pavelfage$ python manage.py shell Traceback (most recent call last): File "manage.py", line 10, in <module> execute_from_command_line(sys.argv) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/__init__.py", line 443, in execute_from_command_line utility.execute() File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/__init__.py", line 382, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/base.py", line 196, in run_from_argv self.execute(*args, **options.__dict__) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/base.py", line 232, in execute output = self.handle(*args, **options) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/base.py", line 371, in handle return self.handle_noargs(**options) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/commands/shell.py", line 45, in handle_noargs from django.db.models.loading import get_models File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/__init__.py", line 40, in <module> backend = load_backend(connection.settings_dict['ENGINE']) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/__init__.py", line 34, in __getattr__ return getattr(connections[DEFAULT_DB_ALIAS], item) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/utils.py", line 92, in __getitem__ backend = load_backend(db['ENGINE']) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/utils.py", line 24, in load_backend return import_module('.base', backend_name) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/utils/importlib.py", line 35, in import_module __import__(name) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/backends/mysql/base.py", line 16, in <module> raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e) django.core.exceptions.ImproperlyConfigured: Error loading MySQLdb module: No module named _mysql Any suggestions really appreciated.
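
    The last frame of the traceback is the real problem: Django's MySQL backend cannot import the MySQLdb driver (its compiled _mysql part is missing), which usually just means the driver is not installed for the Python that runs manage.py. A sketch of the usual fix for a CPython 2.7 setup like the one in the traceback (package availability and build prerequisites, such as the MySQL client headers, vary by platform):

      pip install MySQL-python      # provides the MySQLdb / _mysql modules
      # then verify from the same interpreter:
      python -c "import MySQLdb; print MySQLdb.__version__"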

    Read the article

  • mono --aot with MinGW: unknown pseudo-op: `.local'

    - by Jared Updike
    Can I user mono's AOT feature to natively "pre-compile" .NET DLLs (and or EXEs) to make them harder to reverse engineer? If so, how do I get mono/AOT working in Windows 7? (I'm running x64 but the app is targeting x86 explicitly.) I just installed Mono 2.6.3 and MinGW 5.1.6 and I'm trying to AOT compile an exe (or a dll, it doesn't matter). I get screens and screens of error messages: C:\Users\jupdike\AppData\Local\Temp\mono_aot_XSDEAV:533: Error: junk at end of line, first unrecognized character is `H' C:\Users\jupdike\AppData\Local\Temp\mono_aot_XSDEAV:539: Error: unknown pseudo-op: `.local' C:\Users\jupdike\AppData\Local\Temp\mono_aot_XSDEAV:546: Warning: .size pseudo-op used outside of .def/.endef ignored. C:\Users\jupdike\AppData\Local\Temp\mono_aot_XSDEAV:546: Error: junk at end of line, first unrecognized character is `H' I can open the generated assembly code but I have no idea why the assembler chokes on it: .size HappyForms_TextForm__ctor_string_string_string_bool,.-HappyForms_TextForm__ctor_string_string_string_bool (533) _.Lme_a: .Lme_a: .balign 16 _.Lm_b: .Lm_b: .local HappyForms_TextForm_get_InputValue (539) _HappyForms_TextForm_get_InputValue: HappyForms_TextForm_get_InputValue: .byte 85,139,236,131,236,8,139,69,8,139,128,216,2,0,0,131,236,12,80,139,0,144,144,144,255,144,200,2,0,0,131,196 .byte 16,201,195 .size HappyForms_TextForm_get_InputValue,.-HappyForms_TextForm_get_InputValue (546) (numbers above in parens are line numbers)

    Read the article

  • How to add an image when generating a pdf-file with javascript. When adding the imageLoadFromURL no pdf is generated

    - by Angu Handschuh
    I've got a problem with adding an image when generating a pdf-file with javascript. Here is my code: <!DOCTYPE html> <html> <head> <script type="text/javascript" src="base64.js"></script> <script type="text/javascript" src="sprintf.js"></script> <script type="text/javascript" src="jspdf.js"></script> <script> function demo1() { var name = prompt('Name: '); var nachname=prompt('Nachname: '); var doc = new jsPDF(); doc.setFontSize(22); doc.text(20, 20, 'Der eingegebene Text'); doc.setFontSize(16); doc.imageLoadFromUrl('image.jpg'); doc.imagePlace(20, 40); doc.text(20, 30, 'Name: ' + name); doc.text(20,40,'Nachname:'+nachname); // Output as Data URI doc.output('datauri'); } </script> </head> <body> <h2> Ein Document </h2> <a href="javascript:demo1()"> PDF erstellen </a> </body> </html> Before adding doc.imageLoadFromUrl('image.jpg'); doc.imagePlace(20, 40); the code runs without picture. It starts with a demand note for the name and the second name, after this it generates a pdf-file. But when adding the imageLoad-Method there is no pdf-file generated. Does anyone konws how to solve this problem?

    Read the article

  • Looking for ideas how to refactor my (complex) algorithm

    - by _simon_
    I am trying to write my own Game of Life, with my own set of rules. First 'concept', which I would like to apply, is socialization (which basicaly means if the cell wants to be alone or in a group with other cells). Data structure is 2-dimensional array (for now). In order to be able to move a cell to/away from a group of another cells, I need to determine where to move it. The idea is, that I evaluate all the cells in the area (neighbours) and get a vector, which tells me where to move the cell. Size of the vector is 0 or 1 (don't move or move) and the angle is array of directions (up, down, right, left). This is a image with representation of forces to a cell, like I imagined it (but reach could be more than 5): Let's for example take this picture: Forces from lower left neighbour: down (0), up (2), right (2), left (0) Forces from right neighbour : down (0), up (0), right (0), left (2) sum : down (0), up (2), right (0), left (0) So the cell should go up. I could write an algorithm with a lot of if statements and check all cells in the neighbourhood. Of course this algorithm would be easiest if the 'reach' parameter is set to 1 (first column on picture 1). But what if I change reach parameter to 10 for example? I would need to write an algorithm for each 'reach' parameter in advance... How can I avoid this (notice, that the force is growing potentialy (1, 2, 4, 8, 16, 32,...))? Can I use specific design pattern for this problem? Also: the most important thing is not speed, but to be able to extend initial logic. Things to take into consideration: reach should be passed as a parameter i would like to change function, which calculates force (potential, fibonacci) a cell can go to a new place only if this new place is not populated watch for corners (you can't evaluate right and top neighbours in top-right corner for example)

    Read the article

  • OSMDroid simple example required

    - by Bex
    Hi! I am trying to create an app that uses offline maps and custom tiles. For this I have decided to use OSMDroid and have included the jar within my project. I will create my custom tiles using MOBAC. I have been directed to these examples: http://code.google.com/p/osmdroid/source/browse/#svn%2Ftrunk%2FOpenStreetMapViewer%2Fsrc%2Forg%2Fosmdroid%2Fsamples but I am struggling to follow them as I am new to both java and android. I have created a class file called test (which I have created following an example!): public class test extends Activity { /** Called when the activity is first created. */ protected static final String PROVIDER_NAME = LocationManager.GPS_PROVIDER; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); MapView map = (MapView) findViewById(R.id.map); map.setTileSource(TileSourceFactory.MAPQUESTOSM); map.setBuiltInZoomControls(true); map.setMultiTouchControls(true); map.getController().setZoom(16); map.getController().setCenter(new GeoPoint(30266000, -97739000)); } } with a layout file: <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="fill_parent"> <org.osmdroid.views.MapView android:id="@+id/map" android:layout_width="fill_parent" android:layout_height="fill_parent" tilesource="MapquestOSM" android:layout_weight="1" /> </LinearLayout> When I run this I see no map, just an empty grid. I think this is due to my tilesource but I'm not sure what I need to change it to. Can anyone help? Bex

    Read the article

  • Iteration speed of int vs long

    - by jqno
    I have the following two programs: long startTime = System.currentTimeMillis(); for (int i = 0; i < N; i++); long endTime = System.currentTimeMillis(); System.out.println("Elapsed time: " + (endTime - startTime) + " msecs"); and long startTime = System.currentTimeMillis(); for (long i = 0; i < N; i++); long endTime = System.currentTimeMillis(); System.out.println("Elapsed time: " + (endTime - startTime) + " msecs"); Note: the only difference is the type of the loop variable (int and long). When I run this, the first program consistently prints between 0 and 16 msecs, regardless of the value of N. The second takes a lot longer. For N == Integer.MAX_VALUE, it runs in about 1800 msecs on my machine. The run time appears to be more or less linear in N. So why is this? I suppose the JIT-compiler optimizes the int loop to death. And for good reason, because obviously it doesn't do anything. But why doesn't it do so for the long loop as well? A colleague thought we might be measuring the JIT compiler doing its work in the long loop, but since the run time seems to be linear in N, this probably isn't the case.

    Read the article

  • "no block given" errors with cache_money

    - by emh
    i've inherited a site that in production is generating dozens of "no block given" exceptions every 5 minutes. the top of the stack trace is: vendor/gems/nkallen-cache-money-0.2.5/lib/cash/accessor.rb:42:in `add' vendor/gems/nkallen-cache-money-0.2.5/lib/cash/accessor.rb:33:in `get' vendor/gems/nkallen-cache-money-0.2.5/lib/cash/accessor.rb:22:in `call' vendor/gems/nkallen-cache-money-0.2.5/lib/cash/accessor.rb:22:in `fetch' vendor/gems/nkallen-cache-money-0.2.5/lib/cash/accessor.rb:31:in `get' so it appears that the problem is in the cache money plugin. has anyone experienced something similar? i've cut and pasted the relevant code below -- anyone more familiar with blocks able to discern any obvious problems? 11 def fetch(keys, options = {}, &block) 12 case keys 13 when Array 14 keys = keys.collect { |key| cache_key(key) } 15 hits = repository.get_multi(keys) 16 if (missed_keys = keys - hits.keys).any? 17 missed_values = block.call(missed_keys) 18 hits.merge!(missed_keys.zip(Array(missed_values)).to_hash) 19 end 20 hits 21 else 22 repository.get(cache_key(keys), options[:raw]) || (block ? block.call : nil) 23 end 24 end 25 26 def get(keys, options = {}, &block) 27 case keys 28 when Array 29 fetch(keys, options, &block) 30 else 31 fetch(keys, options) do 32 if block_given? 33 add(keys, result = yield(keys), options) 34 result 35 end 36 end 37 end 38 end 39 40 def add(key, value, options = {}) 41 if repository.add(cache_key(key), value, options[:ttl] || 0, options[:raw]) == "NOT_STORED\r\n" 42 yield 43 end 44 end

    Read the article
