Search Results

Search found 2528 results on 102 pages for 'cheungcc 2000'.


  • Obj-C problem setting array with componentsSeparatedByString

    - by Brodie4598
    I have a data source with about 2000 lines that look like the following:

        6712,Anaktuvuk Pass Airport,Anaktuvuk Pass,United States,AKP,PAKP,68.1336,-151.743,2103,-9,A

    What I am interested in is the 6th section of this string, so I want to turn each line into an array and then check the 6th section ([5]) for an occurrence of the string "PAKP". Code:

        NSBundle *bundle = [NSBundle mainBundle];
        NSString *airportsPath = [bundle pathForResource:@"airports" ofType:@"dat"];
        NSData *data = [NSData dataWithContentsOfFile:airportsPath];
        NSString *dataString = [[[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding] autorelease];
        NSArray *dataArray = [dataString componentsSeparatedByString:@"\n"];
        NSRange locationOfAirport;
        NSString *workingString = [[NSString alloc] initWithFormat:@""];
        NSString *searchedAirport = [[NSString alloc] initWithFormat:@""];
        NSString *airportData = [[NSString alloc] initWithFormat:@""];
        int d;
        for (d = 0; d < [dataArray count]; d = d + 1) {
            workingString = [dataArray objectAtIndex:d];
            testTextBox = workingString; // works correctly
            NSArray *workingArray = [workingString componentsSeparatedByString:@","];
            testTextBox2 = [workingArray objectAtIndex:0]; // correctly displays the first section, "6712"
            testTextBox3 = [workingArray objectAtIndex:1]; // throws exception: index beyond bounds
            locationOfAirport = [[workingArray objectAtIndex:5] rangeOfString:@"PAKP"];
        }

    The problem is that when workingArray populates, it only populates with a single object (the first component of the string, "6712"). If I have it display workingString, it correctly displays the entire line, but for some reason it isn't splitting the string on the commas. I tried it without the data file and it worked fine, so the problem comes from how I am importing the data. Ideas?
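    One hedged diagnostic worth running first (an assumption, not a confirmed diagnosis): if airports.dat uses \r\n or \r line endings, splitting on @"\n" leaves stray carriage returns in each line, and any such surprise shows up quickly if you log what each line actually splits into. Splitting on the full newline character set sidesteps the line-ending question entirely:

        // Sketch: split on any newline style, then log the field count per line.
        NSString *dataString = [NSString stringWithContentsOfFile:airportsPath
                                                         encoding:NSUTF8StringEncoding
                                                            error:NULL];
        NSArray *lines = [dataString componentsSeparatedByCharactersInSet:
                             [NSCharacterSet newlineCharacterSet]];
        for (NSString *line in lines) {
            NSArray *fields = [line componentsSeparatedByString:@","];
            NSLog(@"%lu fields in: %@", (unsigned long)[fields count], line);
            if ([fields count] > 5 &&
                [[fields objectAtIndex:5] rangeOfString:@"PAKP"].location != NSNotFound) {
                NSLog(@"found PAKP in: %@", line);
            }
        }

    Guarding the index with a count check also turns the out-of-bounds exception into a loggable line rather than a crash.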

    Read the article

  • Inserting an image into SQL Server gives an "operand type clash"

    - by Termedi
    I'm trying to save an image in a SQL Server 2000 database. The data type of the column is image. Here is the code.

    Image upload:

        <?php
        include('config.php');
        if (is_uploaded_file($_FILES['userfile']['tmp_name'])) {
            $fileName = $_FILES['userfile']['name'];
            $tmpName = $_FILES['userfile']['tmp_name'];
            $fileSize = $_FILES['userfile']['size'];
            $fileType = $_FILES['userfile']['type'];
            $size = filesize($tmpName);
            set_magic_quotes_runtime(0); // disable PHP's default escaping of special characters in external files
            $img_binaire = base64_encode(fread(fopen(str_replace("'", "''", $tmpName), "r"), $size));
            $query = "INSERT INTO test_image (image_name, image_content, image_size) ".
                     "VALUES ('{$fileName}','{$img_binaire}', '{$size}')";
            odbc_exec($conn, $query) or die('Error, query failed');
            echo "<br>File $fileName uploaded<br>";
            echo "<br>File Size: $fileSize <br>";
        }
        ?>

    Image show:

        <?php
        include('config.php');
        $sql = "select * from test_image where id = 2";
        $rsl = odbc_exec($conn, $sql);
        $image_info = odbc_fetch_array($rsl);
        //$count = sizeof($image_info['image_content']);
        //header('Accept-Ranges: bytes');
        //header('Content-Length: '.$image_info['image_size']);
        //header("Content-length: 17397");
        header('Content-Type: image/jpeg');
        echo base64_decode($image_info['image_content']);
        //echo bindec($image_info['image_content']);
        ?>

    It gives the following error:

        Warning: odbc_exec() [function.odbc-exec]: SQL error: [Microsoft][ODBC SQL Server Driver][SQL Server]Operand type clash: text is incompatible with image, SQL state 22005 in SQLExecDirect in C:\xampp\htdocs\test\upload.php on line 25
        Error, query failed

    What do I need to do differently?
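    A hedged sketch of one common workaround (table and column names are those from the post): SQL Server will not implicitly convert a quoted string literal to the image type, but it will accept an unquoted hexadecimal 0x... literal, so the raw bytes can be inserted without base64 at all:

        <?php
        // Sketch: insert raw bytes into the image column as a hex literal.
        // Assumes $conn, $tmpName, $fileName and $size from the upload script.
        $raw = file_get_contents($tmpName);
        $hex = '0x' . bin2hex($raw); // e.g. 0xFFD8FFE0... for a JPEG
        $query = "INSERT INTO test_image (image_name, image_content, image_size) " .
                 "VALUES ('{$fileName}', {$hex}, {$size})"; // no quotes around the hex literal
        odbc_exec($conn, $query) or die('Error, query failed');
        ?>

    The display script would then echo the column directly instead of calling base64_decode(). Alternatively, keeping the base64 approach but declaring image_content as text would also sidestep the operand type clash, since base64 output is plain text anyway.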

    Read the article

  • Getting svn: E170000: Unrecognized URL scheme for my custom SVN Gradle plugin

    - by Ip Doh
    I wrote a custom Gradle plugin in Groovy to do basic svn tasks like checkout, clean, tag, etc. The Groovy class calls the svn command line client to do these operations. It works fine when I run it on my Windows system, but the same plugin gives the following error when I run it on a Linux system (CentOS):

        svn: E170000: Unrecognized URL scheme for '%22https://source.mycompany.net/svn/MyProject/trunk%22'

    I am able to make the same calls to the command line client through the command prompt or a shell script without any issues, so what is the difference? Here is my code sample:

        String command = String.format("svn co -r %d --non-interactive --trust-server-cert -- username %s --password %s --depth infinity \"%s\" \"%s\"",
                getRevision(), getUserName(), getUserPassword(), getSrcUrl(), getDir());
        Process svnProcess = Runtime.getRuntime().exec(command);
        BufferedReader stdInput = new BufferedReader(new InputStreamReader(svnProcess.getInputStream()));
        BufferedReader stdError = new BufferedReader(new InputStreamReader(svnProcess.getErrorStream()));
        String statusOutputLine = ""
        while ((statusOutputLine = stdInput.readLine()) != null) {
            logger.quiet(" " + statusOutputLine);
        }
        while ((statusOutputLine = stdError.readLine()) != null) {
            logger.error(statusOutputLine)
            throw new Exception(statusOutputLine)
        }
        logger.quiet("Successfully Checked out the work space")

    I do have Neon installed on the system:

        -bash-4.1$ svn --version
        svn, version 1.6.11 (r934486)
           compiled Jun 25 2011, 11:30:15
        Copyright (C) 2000-2009 CollabNet.
        Subversion is open source software, see http://subversion.tigris.org/
        This product includes software developed by CollabNet (http://www.Collab.Net/).

        The following repository access (RA) modules are available:

        * ra_neon : Module for accessing a repository via WebDAV protocol using Neon.
          - handles 'http' scheme
          - handles 'https' scheme
        * ra_svn : Module for accessing a repository using the svn network protocol.
          - with Cyrus SASL authentication
          - handles 'svn' scheme
        * ra_local : Module for accessing a repository on local disk.
          - handles 'file' scheme
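    A hedged sketch of the usual explanation: Runtime.exec(String) just tokenizes the command on whitespace and, unlike a shell, does not strip the embedded \" characters, so on Linux the literal quotes become part of the URL (they show up URL-encoded as %22 in the error). Passing the arguments as a list avoids quoting altogether:

        // Sketch: argument list instead of a single command string, so no
        // shell-style quoting is needed. The getXxx() accessors are from the post.
        def command = [
            'svn', 'co', '-r', getRevision().toString(),
            '--non-interactive', '--trust-server-cert',
            '--username', getUserName(), '--password', getUserPassword(),
            '--depth', 'infinity',
            getSrcUrl(), getDir()
        ]
        Process svnProcess = new ProcessBuilder(command)
                .redirectErrorStream(true)
                .start()
        svnProcess.inputStream.eachLine { logger.quiet(" " + it) }

    Note also that the original format string contains a space in "-- username", which would be worth checking independently of the quoting issue.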

    Read the article

  • Microsoft OLE DB Provider for SQL Server error '80040e14' Could not find stored procedure

    - by BBlake
    I am migrating a classic ASP web app to new servers. The database back end is migrating from SQL Server 2000 to SQL Server 2008, and the app is moving from Win2000 x86 to Win2003R2 x64. I am getting the above error on every single stored procedure call within the application. I have verified:

    - Yes, the SQL user is set up, using the correct username and password.
    - Yes, the SQL user has execute permissions on the stored procedures in the database.
    - Yes, I have updated the TypeLib references to the new UUID.
    - Yes, I have logged into the database via SSMS with the SQL user id, and it can see and execute the stored procedures just fine in SSMS, but not from the web app.
    - Yes, the SQL user has the database set as its default database.

    The most frustrating thing is it works fine on the DEV server, but not on the production server. I have gone through every IIS setting 5 or 6 times, and the web app is set up precisely the same in both environments. The only difference is the database server name in the connection string (DEV vs prod).

    EDIT: I have also tried pointing the prod web box at the dev database server and get the same error, so I'm fairly sure the issue isn't on the database side.
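    A hedged diagnostic, not from the post: after a 2000-to-2008 migration, "could not find stored procedure" on every call is often the connection landing in an unexpected database, a schema/owner prefix issue, or a case-sensitive collation on the new server rejecting a call whose casing differs from the procedure's definition. Both are quick to check over the app's own connection, e.g. from a throwaway ASP page:

        -- Which database and login is the app actually using?
        SELECT DB_NAME() AS current_db, SUSER_SNAME() AS login_name;
        -- Is the database collation case sensitive? (_CS_ means case sensitive)
        SELECT DATABASEPROPERTYEX(DB_NAME(), 'Collation') AS db_collation;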

    Read the article

  • FFmpeg error: av_interleaved_write_frame()

    - by rajaneesh
    This is the output I get after running my PHP code:

        FFmpeg version 0.5, Copyright (c) 2000-2009 Fabrice Bellard, et al.
          configuration: --prefix=/usr --libdir=/usr/lib --shlibdir=/usr/lib --mandir=/usr/share/man --incdir=/usr/include --enable-libamr-nb --enable-libamr-wb --enable-libdirac --enable-libfaac --enable-libfaad --enable-libmp3lame --enable-libtheora --enable-libx264 --enable-gpl --enable-nonfree --enable-postproc --enable-pthreads --enable-shared --enable-swscale --enable-x11grab
          libavutil     49.15. 0 / 49.15. 0
          libavcodec    52.20. 0 / 52.20. 0
          libavformat   52.31. 0 / 52.31. 0
          libavdevice   52. 1. 0 / 52. 1. 0
          libswscale     0. 7. 1 /  0. 7. 1
          libpostproc   51. 2. 0 / 51. 2. 0
          built on Nov 6 2009 19:05:03, gcc: 4.1.2 20080704 (Red Hat 4.1.2-46)
        Seems stream 0 codec frame rate differs from container frame rate: 50.00 (50/1) -> 25.00 (25/1)
        Input #0, flv, from 'demo.flv':
          Duration: 00:00:30.83, start: 0.000000, bitrate: 546 kb/s
            Stream #0.0: Video: h264, yuv420p, 640x360 [PAR 1:1 DAR 16:9], 546 kb/s, 25 tbr, 1k tbn, 50 tbc
            Stream #0.1: Audio: aac, 44100 Hz, stereo, s16
        Output #0, image2, to 'demo.jpg':
            Stream #0.0: Video: mjpeg, yuvj420p, 640x360 [PAR 1:1 DAR 16:9], q=2-31, 200 kb/s, 90k tbn, 1 tbc
        Stream mapping:
          Stream #0.0 -> #0.0
        Press [q] to stop encoding
        av_interleaved_write_frame(): I/O error occurred
        Usually that means that input file is truncated and/or corrupted.
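    A hedged guess at the usual culprits, since the actual ffmpeg command isn't shown in the post: when an image2 output fails with an I/O error, the target directory is often not writable by the user PHP runs as, the disk is full, or multiple frames are being written to a single non-patterned filename. A single-thumbnail invocation that avoids the last issue looks like:

        ffmpeg -i demo.flv -ss 5 -an -vframes 1 -f image2 demo.jpg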

    Read the article

  • #Error showing up in multiple LEFT JOIN statement Access query when value should be NULL

    - by lar
    I'm trying to return an ID's last 4 years of data, if existing. The table (call it A_TABLE) looks like this: ID, Year, Val. The idea behind the query is this: for each ID/Year in the table, LEFT JOIN with Year-1, Year-2, and Year-3 (to get 4 years of data) and then return Val for each year. Here's the SQL:

        SELECT a.ID, a.year AS [Year], a.Val AS VAL,
               a1.year AS [Year-1], a1.Val AS [VAL-1],
               a2.year AS [Year-2], a2.Val AS [VAL-2],
               a3.year AS [Year-3], a3.Val AS [VAL-3]
        FROM ( ([A_TABLE] AS a
        LEFT JOIN [A_TABLE] AS a1 ON (a.ID = a1.ID) AND (a.year = a1.year+1))
        LEFT JOIN [A_TABLE] AS a2 ON (a.ID = a2.ID) AND (a.year = a2.year+2))
        LEFT JOIN [A_TABLE] AS a3 ON (a.ID = a3.ID) AND (a.year = a3.year+3)

    The problem is that, for past years where there is no data (e.g., Year-1), I see "#Error" in the appropriate VAL column (e.g., [VAL-1]). The weird thing is, I see the expected null in the Year column (e.g., [YEAR-1]). Some sample data:

        ID    YEAR  VAL
        Dave  2004  1
        Dave  2006  2
        Dave  2007  3
        Dave  2008  5
        Dave  2009  0

    outputs like this:

        ID    YEAR  VAL  YEAR-1  VAL-1   YEAR-2  VAL-2   YEAR-3  VAL-3
        Dave  2004  1            #Error          #Error          #Error
        Dave  2006  2            #Error  2004    1               #Error
        Dave  2007  3    2006    2               #Error  2004    1
        Dave  2008  5    2007    3       2006    2               #Error
        Dave  2009  0    2008    5       2007    3       2006    2

    Does that make sense? Why am I getting the appropriate null for the non-existent YEARs, but an #Error for the non-existent VALs? (This is Access 2000. Conditional statements like "IIf(a1.val is null, -999, a1.val)" do not seem to do anything.)

    EDIT: It turns out that the errors are somehow caused by the fact that A_TABLE is actually a query. When I put all the data into an actual table and run the same query, everything shows up as it should. Thanks for the help, everyone.

    Read the article

  • Syntax error in MySQL stored procedure

    - by karthik
    I am using the stored procedure below in MySQL to generate insert statements. I am getting the following error:

        Script line: 4 You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '\’”\’,',’ifnull(‘,column_name,’,””)’,',\’”\’)')) INTO @S' at line 12

    What would be the syntax problem in that?

        DELIMITER $$

        DROP PROCEDURE IF EXISTS `InsGen` $$
        CREATE DEFINER=`root`@`localhost` PROCEDURE `InsGen`(in_db varchar(20),in_table varchar(20),in_file varchar(100))
        BEGIN

        declare Whrs varchar(500);
        declare Sels varchar(500);
        declare Inserts varchar(2000);
        declare tablename varchar(20);
        set tablename=in_table;
        select tablename;

        # Comma separated column names – used for Select
        select group_concat(concat(‘concat(\’”\’,',’ifnull(‘,column_name,’,””)’,',\’”\’)'))
        INTO @Sels
        from information_schema.columns
        where table_schema=’test’ and table_name=tablename;

        # Comma separated column names – used for Group By
        select group_concat(‘`’,column_name,’`')
        INTO @Whrs
        from information_schema.columns
        where table_schema=’test’ and table_name=tablename;

        #Main Select Statement for fetching comma separated table values
        set @Inserts=concat(“select concat(‘insert into “, in_db,”.”,tablename,” values(‘,concat_ws(‘,’,”,@Sels,”),’);’) from “, in_db,”.”,tablename,” group by “,@Whrs, ” INTO OUTFILE ‘”, in_file ,”‘”);

        PREPARE Inserts FROM @Inserts;
        EXECUTE Inserts;

        END $$
        DELIMITER ;
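    A hedged observation (the likely culprit, though not verified against the poster's server): the procedure body is full of typographic quotes (‘ ’ “ ”), which typically sneak in when code is copied from a blog or word processor, and MySQL only understands straight ASCII quotes (' and "). As a sketch, the first offending statement rewritten with straight quotes would look like:

        select group_concat(concat('concat(\'"\',','ifnull(',column_name,',"")',',\'"\')'))
        INTO @Sels
        from information_schema.columns
        where table_schema='test' and table_name=tablename;

    The same substitution (’ to ', and “ ” to ") would need to be applied throughout, including the set @Inserts=concat(...) statement.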

    Read the article

  • Creating a service (SERVICE_ACCEPT_SESSIONCHANGE)

    - by Ron
    Hi there, I am trying to create a service following the example documented here: http://msdn.microsoft.com/en-us/library/bb540475(v=VS.85).aspx. What I want is to be able to catch user "lock" and "unlock" workstation events. Starting from the code in the example, I made the following changes.

    Line 15, original:

        VOID WINAPI SvcCtrlHandler( DWORD );

    Modified:

        DWORD WINAPI SvcCtrlHandler( DWORD, DWORD, LPVOID, LPVOID );

    Line 141, original:

        gSvcStatusHandle = RegisterServiceCtrlHandler( SVCNAME, SvcCtrlHandler);

    Modified:

        gSvcStatusHandle = RegisterServiceCtrlHandlerEx( SVCNAME, SvcCtrlHandler, NULL);

    Line 244, original:

        SvcStatus.dwControlsAccepted = SERVICE_ACCEPT_STOP;

    Modified:

        gSvcStatus.dwControlsAccepted = SERVICE_ACCEPT_STOP|SERVICE_ACCEPT_SESSIONCHANGE;

    Line 266, original:

        VOID WINAPI SvcCtrlHandler( DWORD dwCtrl )
        {
            // Handle the requested control code.
            switch(dwCtrl)
            {
                case SERVICE_CONTROL_STOP:
                    ReportSvcStatus(SERVICE_STOP_PENDING, NO_ERROR, 0);
                    // Signal the service to stop.
                    SetEvent(ghSvcStopEvent);
                    ReportSvcStatus(gSvcStatus.dwCurrentState, NO_ERROR, 0);
                    return;
                case SERVICE_CONTROL_INTERROGATE:
                    break;
                default:
                    break;
            }
        }

    Modified:

        DWORD WINAPI SvcCtrlHandler( DWORD dwControl, DWORD dwEventType, LPVOID lpEventData, LPVOID lpContext )
        {
            DWORD dwErrorCode = NO_ERROR;
            switch(dwControl)
            {
                case SERVICE_CONTROL_STOP:
                    ReportSvcStatus(SERVICE_STOP_PENDING, NO_ERROR, 0);
                    // Signal the service to stop.
                    SetEvent(ghSvcStopEvent);
                    ReportSvcStatus(gSvcStatus.dwCurrentState, NO_ERROR, 0);
                    break;
                case SERVICE_CONTROL_INTERROGATE:
                    break;
                case SERVICE_CONTROL_SESSIONCHANGE:
                    ReportSvcStatus(gSvcStatus.dwCurrentState, NO_ERROR, 0);
                    break;
                default:
                    break;
            }
            return dwErrorCode;
        }

    With the changes above, my service compiles and installs fine. But when I try to start the service on my Windows 2000 machine, it does not start properly (it gets stuck in the "starting" status). Can anyone please advise? Thank you in advance, Ron
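    Worth flagging as a hedged observation rather than a confirmed fix: SERVICE_ACCEPT_SESSIONCHANGE and SERVICE_CONTROL_SESSIONCHANGE are documented as requiring Windows XP or later, so a Windows 2000 SCM cannot deliver session-change notifications in the first place. If the service must still run on Windows 2000, one sketch is to advertise the control only where the OS supports it:

        // Sketch: only accept session-change controls on XP (5.1) or later.
        OSVERSIONINFO vi = { sizeof(vi) };
        GetVersionEx(&vi);
        gSvcStatus.dwControlsAccepted = SERVICE_ACCEPT_STOP;
        if (vi.dwMajorVersion > 5 ||
            (vi.dwMajorVersion == 5 && vi.dwMinorVersion >= 1)) {
            gSvcStatus.dwControlsAccepted |= SERVICE_ACCEPT_SESSIONCHANGE;
        }

    On Windows 2000, lock/unlock events would have to come from a different mechanism entirely.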

    Read the article

  • Convert SWF file to FLV with FFmpeg: getting error "could not find codec parameters"

    - by Ritesh
    Hi, I am trying to convert a SWF file to FLV, but I keep getting the same error:

        C:\Users\Administrator> C:/ffmpeg/ffmpeg.exe -i C:/xampplite/htdocs/ffmpeg/1.swf C:/xampplite/htdocs/ffmpeg/file1.flv
        FFmpeg version SVN-r16573, Copyright (c) 2000-2009 Fabrice Bellard, et al.
          configuration: --extra-cflags=-fno-common --enable-memalign-hack --enable-pthreads --enable-libmp3lame --enable-libxvid --enable-libvorbis --enable-libtheora --enable-libspeex --enable-libfaac --enable-libgsm --enable-libx264 --enable-libschroedinger --enable-avisynth --enable-swscale --enable-gpl
          libavutil     49.12. 0 / 49.12. 0
          libavcodec    52.10. 0 / 52.10. 0
          libavformat   52.23. 1 / 52.23. 1
          libavdevice   52. 1. 0 / 52. 1. 0
          libswscale     0. 6. 1 /  0. 6. 1
          built on Jan 13 2009 02:57:09, gcc: 4.2.4
        C:/xampplite/htdocs/ffmpeg/1.swf: could not find codec parameters

    Please help me solve this problem. What am I doing wrong?

    Read the article

  • javascript table sorting/paging (client-side). How big is too big?

    - by Aheho
    I'm using a jQuery plugin called Tablesorter to do client-side sorting of a log table in one of my applications. I am also making use of the tablepager add-in. I really like the responsiveness that client-side sorting and paging brings to the party, and I also like how you don't have to hit the web server or database repeatedly.

    However, I can see that, in time, the log I'm displaying could grow quite large, and I'm sure there comes a point where client-side paging and sorting is going to be impractical. At what point will this technique begin to collapse under its own weight? 500 records? 2000 records? 10,000 records?

    EDIT: In a nutshell, what criteria would you use to decide between client-side sorting/paging and server-side paging? Does the size of the expected result set factor into your decision? Where is the tipping point?

    Read the article

  • WPF, C# - Making Intellisense/Autocomplete list, fastest way to filter list of strings

    - by user559548
    Hello everyone, I'm writing an Intellisense/Autocomplete like the one you find in Visual Studio. It's all fine up until when the list contains roughly 2000+ items. I'm using a simple LINQ statement for the filtering:

        var filterCollection = from s in listCollection
                               where s.FilterValue.IndexOf(currentWord, StringComparison.OrdinalIgnoreCase) >= 0
                               orderby s.FilterValue
                               select s;

    I then assign this collection to a WPF ListBox's ItemsSource, and that's the end of it; it works fine. Note that the ListBox is virtualised as well, so there will only be at most 7-8 visual elements in memory and in the visual tree.

    However, the caveat right now is that when the user types extremely fast in the richtextbox and I execute the filtering + binding on every key up, there's this semi-race condition, or out-of-sync filtering: the first keystroke's filtering could still be doing its filtering or binding work while the fourth keystroke's is also running. I know I could put in a delay before applying the filter, but I'm trying to achieve a seamless filtering much like the one in Visual Studio. I'm not sure where my problem exactly lies, so I'm also attributing it to IndexOf's string operation; or perhaps my list of strings could be optimised with some kind of index that speeds up searching. Any suggestions or code samples are much welcomed. Thanks.
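    One hedged idea, sketched below rather than taken from the post (Item stands in for the element type carrying FilterValue): when the new word merely extends the previous one, every new match is necessarily contained in the previous result set, so the filter can narrow the last results instead of re-scanning all 2000+ items. Each keystroke's work then shrinks with the candidate list:

        // Sketch: incremental narrowing of the previous filter results.
        private List<Item> _lastResults;
        private string _lastWord = "";

        private IEnumerable<Item> Filter(string currentWord)
        {
            var source = _lastResults != null
                         && currentWord.StartsWith(_lastWord, StringComparison.OrdinalIgnoreCase)
                ? _lastResults          // superset of the new matches
                : listCollection;       // fall back to the full list

            _lastResults = source
                .Where(s => s.FilterValue.IndexOf(currentWord, StringComparison.OrdinalIgnoreCase) >= 0)
                .OrderBy(s => s.FilterValue)
                .ToList();

            _lastWord = currentWord;
            return _lastResults;
        }

    Because the narrowing is correct whenever currentWord starts with the previous word (any string containing currentWord also contains its prefix), the fallback to the full list only happens on deletions or corrections.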

    Read the article

  • Why does my ActivePerl program report 'Sorry. Ran out of threads'?

    - by Zaid
    Tom Christiansen's example code (à la perlthrtut) is a recursive, threaded implementation of finding and printing all prime numbers between 3 and 1000. Below is a mildly adapted version of the script:

        #!/usr/bin/perl
        # adapted from prime-pthread, courtesy of Tom Christiansen

        use strict;
        use warnings;
        use threads;
        use Thread::Queue;

        sub check_prime {
            my ($upstream, $cur_prime) = @_;
            my $child;
            my $downstream = Thread::Queue->new;

            while (my $num = $upstream->dequeue) {
                next unless ($num % $cur_prime);
                if ($child) {
                    $downstream->enqueue($num);
                }
                else {
                    $child = threads->create(\&check_prime, $downstream, $num);
                    if ($child) {
                        print "This is thread ", $child->tid, ". Found prime: $num\n";
                    }
                    else {
                        warn "Sorry. Ran out of threads.\n";
                        last;
                    }
                }
            }

            if ($child) {
                $downstream->enqueue(undef);
                $child->join;
            }
        }

        my $stream = Thread::Queue->new(3 .. shift, undef);
        check_prime($stream, 2);

    When run on my machine (under ActiveState & Win32), the code was capable of spawning only 118 threads (last prime found: 653) before terminating with a 'Sorry. Ran out of threads' warning. In trying to figure out why I was limited in the number of threads I could create, I replaced the use threads; line with use threads (stack_size => 1);. The resulting code happily dealt with churning out 2000+ threads. Can anyone explain this behavior?
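    A hedged explanation consistent with the stack_size observation (the numbers are approximations, not measured on the poster's build): each ithread reserves its own C stack up front, and the default reservation under Win32 Perls is large, on the order of 16 MB, so a 32-bit process's roughly 2 GB of usable address space is exhausted after about 120-130 threads. At that point threads->create returns undef, which the script reports as "Ran out of threads". Shrinking the per-thread stack raises the ceiling:

        # Sketch: request a small per-thread stack (the value is rounded up
        # to the platform's allocation granularity).
        use threads (stack_size => 64 * 1024);

        # ...or per spawn, leaving the module default alone:
        my $child = threads->create(
            { 'stack_size' => 64 * 1024 },
            \&check_prime, $downstream, $num,
        );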

    Read the article

  • jQuery background-position animation

    - by depi
    Hi guys, I've created an image which is basically a CSS sprite of 3 images together. Its size is 278x123, so it is basically 3 images of 278x41 stacked. What I am trying to do is make an animation of that by changing the background position. I've tried many things; one of my not-very-working solutions is the following:

        var $slogan = $('#header h2 span');
        $slogan.css({backgroundPosition: '0px 0px'});

        function slogan_animation() {
            if ($slogan.css('background-position') == '0px 0px') {
                $slogan.fadeIn('slow').css('background-position', '0px -41px').fadeOut('slow');
            } else if ($slogan.css('background-position') == '0px -41px') {
                $slogan.fadeIn('slow').css('background-position', '0px -82px').fadeOut('slow');
            } else if ($slogan.css('background-position') == '0px -82px') {
                $slogan.fadeIn('slow').css('background-position', '0px 0px').fadeOut('slow');
            }
        }
        setInterval(slogan_animation, 2000);

    Any ideas how I could do that? Basically I just need to set my background position to "0px 0px", then move it to "0px -41px", then "0px -82px", and then loop again from "0px 0px". It would also be great to have a fadeIn() effect between those. Any ideas? Thank you.
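    A hedged sketch of one way to do this (the positions are the three sprite offsets from the post): the background-position string jQuery reports back varies across browsers, so comparing against it, as the original does, is fragile. Keeping the current frame in an index variable avoids reading the CSS back at all, and doing the swap inside fadeOut's callback gives the fade-out / swap / fade-in sequence:

        var $slogan = $('#header h2 span');
        var positions = ['0px 0px', '0px -41px', '0px -82px'];
        var current = 0;

        setInterval(function () {
            // fade out, move to the next sprite frame, fade back in
            $slogan.fadeOut('slow', function () {
                current = (current + 1) % positions.length;
                $slogan.css('background-position', positions[current]).fadeIn('slow');
            });
        }, 2000);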

    Read the article

  • Android Debugging InetAddress.isReachable

    - by badMonkey
    I am trying to figure out how to tell if a particular IP address is available in my Android app during debugging (I haven't tried this on an actual device). From reading, it appears that InetAddress.isReachable should do this for me. Initially I thought that I could code something like:

        InetAddress address = InetAddress.getByAddress(
                new byte[] { (byte) 192, (byte) 168, (byte) 254, (byte) 10 });
        success = address.isReachable(3000);

    This returns false even though I am reasonably sure it is a reachable address. I found that if I changed this to 127, 0, 0, 1 it returned success. My next attempt was the same code, but I used the address I got from a ping of www.google.com (72.167.164.64 as of this writing). No success. So then I tried a further example:

        int timeout = 2000;
        InetAddress[] addresses = InetAddress.getAllByName("www.google.com");
        for (InetAddress address : addresses) {
            if (address.isReachable(timeout)) {
                success = true; // just set a break point here
            }
        }

    I am relatively new to Java and Android, so I suspect I am missing something, but I can't find anything that would indicate what that is.
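    A hedged note on why isReachable() disappoints here: per its documentation, it only sends a real ICMP echo when the process has the privilege for a raw socket; otherwise it falls back to a TCP connection to port 7 (the echo service), which almost every modern host and firewall blocks, so false comes back even for live addresses. A common workaround is to test a port you actually expect to be open (port 80 below is an assumption):

        // Sketch: "reachable" defined as "accepts a TCP connection within the timeout".
        import java.io.IOException;
        import java.net.InetSocketAddress;
        import java.net.Socket;

        public final class Reachability {
            public static boolean canConnect(String host, int port, int timeoutMs) {
                Socket socket = new Socket();
                try {
                    socket.connect(new InetSocketAddress(host, port), timeoutMs);
                    return true;
                } catch (IOException e) {
                    return false;
                } finally {
                    try { socket.close(); } catch (IOException ignored) { }
                }
            }
        }

    Usage would be something like: boolean success = Reachability.canConnect("192.168.254.10", 80, 3000);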

    Read the article

  • Field specific errors for ETL

    - by AaronLS
    I am creating an ETL process in MS SQL Server, and I would like to have errors specific to a particular column of a particular row. For example, the data is initially loaded from Excel files into a table (we'll call it the Initial table) where all columns are varchar(2000), and then I stage the data to another table (the DataTyped table) that contains more specific data types (datetime, int, etc.) or more tightly constrained varchar lengths. I need to be able to create error messages for a specific field, such as:

        "Jan. 13th" is not a valid date format for the submission date. Please use a format of MM/DD/YYYY.

    These error messages would need to be stored in some way such that, later in the process, an automated process can create reports with the error messages, with each message referencing a specific row and field (someone will need to go back, correct the data in the source system, and resubmit the Excel file). So ideally the messages would be inserted into a Failures table of some sort, containing the primary key of the failed row, the column name, and the error message.

    Question: I am wondering if this can be accomplished with SSIS, or some open source tool like Talend, and if so, what would be your general approach? Or what hand-coded approach would you take? A couple of approaches I've thought of using SQL (up until now I have done ETL by hand in SQL procs, but I want to consider other approaches, possibly even C#):

    - Use a cursor to read through the Initial table, and for each row insert a blank record with only the primary key into the DataTyped table, then use a single update statement for each column, such that if that update fails I can insert a very specific error message for that column into the error messages table.
    - Insert all the data as-is into the DataTyped table, but have duplicate columns like SubmissionDate and SubmissionDateOld. After the initial insert the *Old columns have data and the rest are blank, and I have a single update for each column that sets SubmissionDate based on SubmissionDateOld.

    In addition to suggesting an approach, I'd like to know if you are using that approach or something similar already in the work you do.
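    As a hedged sketch of a set-based variant of the first approach (table and column names below are invented for illustration): validate each column with one INSERT into an error table before the typed load, so every failure records the row key, the column, and a message, without a cursor:

        -- Hypothetical failures table: one row per failed field.
        CREATE TABLE LoadError (
            InitialRowId  int           NOT NULL,  -- PK of the row in the Initial table
            ColumnName    sysname       NOT NULL,
            ErrorMessage  nvarchar(400) NOT NULL
        );

        -- One validation statement per column, e.g. the submission date:
        INSERT INTO LoadError (InitialRowId, ColumnName, ErrorMessage)
        SELECT i.RowId, 'SubmissionDate',
               '"' + i.SubmissionDate + '" is not a valid date format for the submission date. Please use a format of MM/DD/YYYY'
        FROM   Initial AS i
        WHERE  i.SubmissionDate IS NOT NULL
          AND  ISDATE(i.SubmissionDate) = 0;

        -- Rows with no recorded errors can then be typed and staged in one pass:
        INSERT INTO DataTypedTable (RowId, SubmissionDate /* , ... */)
        SELECT i.RowId, CONVERT(datetime, i.SubmissionDate)
        FROM   Initial AS i
        WHERE  NOT EXISTS (SELECT 1 FROM LoadError e WHERE e.InitialRowId = i.RowId);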

    Read the article

  • Integrating Search Server 2008 Express with WSS 3.0

    - by Jason Kemp
    I'm setting up the environment for an intranet using WSS (Windows SharePoint Services) 3.0. The catch is getting the environment configured to work with MS Search Server 2008 Express. Here's the environment I'd like to set up:

    - A: Web server; Win Server 2003 SP2; WSS 3.0 SP2; IIS 6.0; .NET 3.5 SP1
    - B: Search server; Win Server 2003 SP2; WSS 3.0 SP2; IIS 6.0; .NET 3.5 SP1; Search Server 2008 Express
    - C: Database server; Win Server 2003 SP2; SQL Server 2000 SP3 - Admin db, Content db, Config db, Search db

    The question is whether 3 servers can be used in the above configuration, or whether the search server (B) has to be combined with (A), since we're using the free Express version of Search Server. The documentation from MS doesn't make it clear either way. I could attack this problem with trial and error, but I would rather not. The bigger question is: what is the best practice for a WSS / Search Server installation?

    Read the article

  • Help! The log file for database 'tempdb' is full. Back up the transaction log for the database to free up some log space

    - by michael.lukatchik
    We're running SQL Server 2000. In our database, we have an "Orders" table with approximately 750,000 rows. We can perform simple SELECT statements on this table. However, when we want to run a query like SELECT TOP 100 * FROM Orders ORDER BY Date_Ordered DESC, we receive the following message:

        Error: 9002, Severity: 17, State: 6
        The log file for database 'tempdb' is full. Back up the transaction log for the database to free up some log space.

    We have other tables in our database which are similar in size (i.e. 700,000 records), and on those tables we can run any queries we'd like without ever receiving a message about tempdb being full. To resolve this, we've backed up our database, shrunk the actual database, and also shrunk the database and files in the tempdb system database, but this hasn't resolved the issue. The size of our log file is set to autogrow. We're not sure where to go next. Are there any ideas why we might still be receiving this message?
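    A hedged explanation of why this query in particular hurts (an inference from the symptoms, not a verified diagnosis): without an index on Date_Ordered, TOP 100 ... ORDER BY Date_Ordered DESC must sort all 750,000 rows, and SQL Server 2000 performs that sort in tempdb, growing its log until it hits a cap. An index matching the sort lets the query read just 100 rows:

        -- Sketch: a descending index matching the ORDER BY avoids the big sort.
        CREATE INDEX IX_Orders_DateOrdered ON dbo.Orders (Date_Ordered DESC);

        -- If tempdb's log has a fixed MAXSIZE, unrestricted growth can be
        -- re-enabled (templog is the default logical name; verify with sp_helpfile).
        ALTER DATABASE tempdb MODIFY FILE (NAME = templog, MAXSIZE = UNLIMITED);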

    Read the article

  • Segmentation fault while feeding in an mpeg file through ffmpeg

    - by angel6
    Hi, I've set up FFserver as the streaming server, and I'm trying to feed in an MPEG file, but it comes up with a segmentation fault. Does anyone know how to fix this? The following is the command-line output I get:

        $ ./ffmpeg -i test1.mpg http://localhost:8090/feed1.ffm
        FFmpeg version SVN-r22945, Copyright (c) 2000-2010 the FFmpeg developers
          built on Apr 22 2010 19:18:45 with gcc 4.4.1
          configuration: --enable-gpl --enable-version3 --enable-nonfree --enable-postproc --enable-pthreads --enable-libfaac --enable-libfaad --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libtheora --enable-libx264 --enable-libxvid --enable-x11grab
          libavutil     50.14. 0 / 50.14. 0
          libavcodec    52.66. 0 / 52.66. 0
          libavformat   52.61. 0 / 52.61. 0
          libavdevice   52. 2. 0 / 52. 2. 0
          libswscale     0.10. 0 /  0.10. 0
          libpostproc   51. 2. 0 / 51. 2. 0
        [mpeg @ 0xab0c420]max_analyze_duration reached
        Input #0, mpeg, from 'test1.mpg':
          Duration: 00:00:20.96, start: 0.768300, bitrate: 269 kb/s
            Stream #0.0[0x1e0]: Video: mpeg1video, yuv420p, 160x120 [PAR 1:1 DAR 4:3], 104857 kb/s, 30 fps, 30 tbr, 90k tbn, 30 tbc
            Stream #0.1[0x1c0]: Audio: mp2, 32000 Hz, 2 channels, s16, 64 kb/s
        Output #0, ffm, to 'http://localhost:8090/feed1.ffm':
          Metadata:
            encoder : Lavf52.61.0
            Stream #0.0: Audio: mp2, 22050 Hz, 1 channels, s16, 48 kb/s
            Stream #0.1: Video: mpeg1video, yuv420p, 160x128, q=2-31, 40 kb/s, 1000k tbn, 50 tbc
            Stream #0.2: Audio: libmp3lame, 22050 Hz, 1 channels, s16, 64 kb/s
            Stream #0.3: Video: msmpeg4, yuv420p, 352x240, q=2-31, 256 kb/s, 1000k tbn, 15 tbc
        Stream mapping:
          Stream #0.1 -> #0.0
          Stream #0.0 -> #0.1
          Stream #0.1 -> #0.2
          Stream #0.0 -> #0.3
        Press [q] to stop encoding
        Segmentation fault

    Read the article

  • Setting up ehcache replication - what multicast settings do I need?

    - by Darren Greaves
    I am trying to set up ehcache replication as documented here: http://ehcache.sourceforge.net/EhcacheUserGuide.html#id.s22.2. This is on a Windows machine but will ultimately run on Solaris in production. The instructions say to set up a provider as follows:

        <cacheManagerPeerProviderFactory
            class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
            properties="peerDiscovery=automatic, multicastGroupAddress=230.0.0.1,
                        multicastGroupPort=4446, timeToLive=32"/>

    And a listener like this:

        <cacheManagerPeerListenerFactory
            class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
            properties="hostName=localhost, port=40001, socketTimeoutMillis=2000"/>

    My questions are:

    - Are the multicast IP address and port arbitrary (I know the address has to live within a specific range, but do they have to be specific numbers)?
    - Do they need to be set up in some way by our system administrator (I am on an office network)?
    - I want to test it locally, so I am running two separate Tomcat instances with the above config. What do I need to change in each one? I know both listeners can't listen on the same port, but what about the provider? Are the listener ports arbitrary too?

    I've tried setting it up as above, but in my testing the caches don't appear to be replicated: a value added in one Tomcat's cache is not present in the other. Is there anything I can do to debug this situation (other than packet sniffing)? Thanks in advance for any help, been tearing my hair out over this one!
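    On the local-testing question, a hedged sketch (the port numbers are arbitrary examples, and myCacheName is a placeholder for the replicated cache's name): the multicast group address/port only need to be identical across the peers that should discover each other, but multicast on a single machine can be flaky, and ehcache also supports manual peer discovery, which makes a two-Tomcat test deterministic. Each instance binds its own listener port and names the other's URL:

        <!-- Instance 1: listener on 40001, peer at 40002.
             Instance 2 would swap the two port numbers. -->
        <cacheManagerPeerProviderFactory
            class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
            properties="peerDiscovery=manual,
                        rmiUrls=//localhost:40002/myCacheName"/>

        <cacheManagerPeerListenerFactory
            class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
            properties="hostName=localhost, port=40001, socketTimeoutMillis=2000"/>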

    Read the article

  • SqlBulkCopy slow as molasses

    - by Chris
    I'm looking for the fastest way to load bulk data via C#. I have this script that does the job, but slowly. I've read testimonials that SqlBulkCopy is the fastest. Currently 1000 records take about 2.5 seconds, and the files contain anywhere from 5000 records to 250k. What are some of the things that can slow it down?

    Table def:

        CREATE TABLE [dbo].[tempDispositions](
            [QuotaGroup] [varchar](100) NULL,
            [Country] [varchar](50) NULL,
            [ServiceGroup] [varchar](50) NULL,
            [Language] [varchar](50) NULL,
            [ContactChannel] [varchar](10) NULL,
            [TrackingID] [varchar](20) NULL,
            [CaseClosedDate] [varchar](25) NULL,
            [MSFTRep] [varchar](50) NULL,
            [CustEmail] [varchar](100) NULL,
            [CustPhone] [varchar](100) NULL,
            [CustomerName] [nvarchar](100) NULL,
            [ProductFamily] [varchar](35) NULL,
            [ProductSubType] [varchar](255) NULL,
            [CandidateReceivedDate] [varchar](25) NULL,
            [SurveyMode] [varchar](1) NULL,
            [SurveyWaveStartDate] [varchar](25) NULL,
            [SurveyInvitationDate] [varchar](25) NULL,
            [SurveyReminderDate] [varchar](25) NULL,
            [SurveyCompleteDate] [varchar](25) NULL,
            [OptOutDate] [varchar](25) NULL,
            [SurveyWaveEndDate] [varchar](25) NULL,
            [DispositionCode] [varchar](5) NULL,
            [SurveyName] [varchar](20) NULL,
            [SurveyVendor] [varchar](20) NULL,
            [BusinessUnitName] [varchar](25) NULL,
            [UploadId] [int] NULL,
            [LineNumber] [int] NULL,
            [BusinessUnitSubgroup] [varchar](25) NULL,
            [FileDate] [datetime] NULL
        ) ON [PRIMARY]

    And here's the code:

        private void BulkLoadContent(DataTable dt)
        {
            OnMessage("Bulk loading records to temp table");
            OnSubMessage("Bulk Load Started");
            using (SqlBulkCopy bcp = new SqlBulkCopy(conn))
            {
                bcp.DestinationTableName = "dbo.tempDispositions";
                bcp.BulkCopyTimeout = 0;
                foreach (DataColumn dc in dt.Columns)
                {
                    bcp.ColumnMappings.Add(dc.ColumnName, dc.ColumnName);
                }
                bcp.NotifyAfter = 2000;
                bcp.SqlRowsCopied += new SqlRowsCopiedEventHandler(bcp_SqlRowsCopied);
                bcp.WriteToServer(dt);
                bcp.Close();
            }
        }
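    A hedged checklist of the usual suspects (assumptions, since only the load code is shown): indexes, triggers, or constraints on tempDispositions that are maintained per row; the default row-level locking when a table lock isn't requested; a slow link to the server; or the cost of building the DataTable itself. The two cheapest experiments are a table lock and batching:

        // Sketch: same method as above, with table lock and batching enabled.
        using (SqlBulkCopy bcp = new SqlBulkCopy(conn, SqlBulkCopyOptions.TableLock, null))
        {
            bcp.DestinationTableName = "dbo.tempDispositions";
            bcp.BulkCopyTimeout = 0;
            bcp.BatchSize = 5000; // commit in chunks rather than one huge batch
            foreach (DataColumn dc in dt.Columns)
            {
                bcp.ColumnMappings.Add(dc.ColumnName, dc.ColumnName);
            }
            bcp.WriteToServer(dt);
        }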

    Read the article

  • Very interesting problem in Compact Framework

    - by Alexander
    Hi, I have a performance problem while inserting data into SQL CE. I'm reading a string and making inserts into my tables. Into the LU_MAM table, I insert 1000 records within 8 seconds. After the MAM tables I make some more inserts, but my largest table is CR_MUS. When I want to insert records into CR_MUS, it takes too much time: CR_MUS has 2000 records and the insert takes 35 seconds. What can be the reason? I use the same logic in my insert functions. Do you have any idea? I use VS 2008 SP1.

        Dim reader As StringReader
        reader = New StringReader(data)
        cn = New SqlCeConnection(General.ConnString)
        cn.Open()
        If myTransfer.ClearTables(cn, cmd) = True Then
            progress = 0
            '------------------------------------------
            cmd = New SqlServerCe.SqlCeCommand
            Dim rs As SqlCeResultSet
            cmd.Connection = cn
            cmd.CommandType = CommandType.TableDirect
            Dim rec As SqlCeUpdatableRecord
            ' name of table
            While reader.Peek > -1
                If strerr_col = "" Then
                    satir = reader.ReadLine()
                    ayrac = Split(satir, "|")
                    If ayrac(0).ToString() = "LC" Then
                        prgsbar.Maximum = Convert.ToInt32(ayrac(1))
                    ElseIf ayrac(0).ToString = "PPAR" Then
                        If ayrac(2).ToString <> General.PMVer Then
                            ShowWaitCursor(False)
                            txtDurum.Text = "Wrong Version"
                            Exit Sub
                        End If
                        If p_POCKET_PARAMETERS = True Then
                            cmd.CommandText = "POCKET_PARAMETERS"
                            txtDurum.Text = "POCKET_PARAMETERS"
                            rs = cmd.ExecuteResultSet(ResultSetOptions.Updatable)
                            rec = rs.CreateRecord()
                            p_POCKET_PARAMETERS = False
                        End If
                        strerr_col = myVERI_AL.POCKET_PARAMETERS_I(ayrac, cmd, rs, rec)
                        prgsbar.Value += 1
                    ElseIf ayrac(0).ToString() = "MAM" Then
                        If p_LU_MAM = True Then
                            txtDurum.Text = "LU_MAM"
                            cmd.CommandText = "LU_MAM"
                            rs = cmd.ExecuteResultSet(ResultSetOptions.Updatable)
                            rec = rs.CreateRecord()
                            p_LU_MAM = False
                        End If
                        strerr_col = myVERI_AL.LU_MAM_I(ayrac, cmd, rs, rec)
                        prgsbar.Value += 1
                    ElseIf ayrac(0).ToString = "KMUS" Then
                        If p_CR_MUS = True Then
                            cmd.CommandText = "CR_MUS"
                            txtDurum.Text = "CR_MUS"
                            rs = cmd.ExecuteResultSet(ResultSetOptions.Updatable)
                            rec = rs.CreateRecord()
                            p_TR_KAMPANYA_MALZEME = False
                        End If
                        strerr_col = myVERI_AL.CR_MUS_I(ayrac, cmd, rs, rec)
                        prgsbar.Value += 1
                    End If
                End If
            End While

        Public Function CR_KAMPANYA_MUSTERI_I(ByVal f_Line() As String, ByRef myComm As SqlCeCommand, ByRef rs As SqlCeResultSet, ByRef rec As SqlCeUpdatableRecord) As String
            Try
                rec.SetValue(0, If(f_Line(1) = String.Empty, DirectCast(DBNull.Value, Object), f_Line(1)))
                rec.SetValue(1, If(f_Line(2) = String.Empty, DirectCast(DBNull.Value, Object), f_Line(2)))
                rec.SetValue(2, If(f_Line(3) = String.Empty, DirectCast(DBNull.Value, Object), f_Line(3)))
                rec.SetValue(3, If(f_Line(5) = String.Empty, DirectCast(DBNull.Value, Object), f_Line(5)))
                rec.SetValue(4, If(f_Line(6) = String.Empty, DirectCast(DBNull.Value, Object), f_Line(6)))
                rs.Insert(rec)
            Catch ex As Exception
                strerr_col = ex.Message
            End Try
            Return strerr_col
        End Function
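    A hedged observation rather than a confirmed diagnosis: in the KMUS branch the guard flag is never cleared. The If p_CR_MUS = True block sets p_TR_KAMPANYA_MALZEME = False instead of p_CR_MUS = False, so cmd.ExecuteResultSet and rs.CreateRecord run again for every one of the 2000 CR_MUS rows, which alone could explain 35 seconds versus 8. A corrected sketch of that branch:

        ElseIf ayrac(0).ToString = "KMUS" Then
            If p_CR_MUS = True Then
                cmd.CommandText = "CR_MUS"
                txtDurum.Text = "CR_MUS"
                rs = cmd.ExecuteResultSet(ResultSetOptions.Updatable)
                rec = rs.CreateRecord()
                p_CR_MUS = False ' was: p_TR_KAMPANYA_MALZEME = False
            End If
            strerr_col = myVERI_AL.CR_MUS_I(ayrac, cmd, rs, rec)
            prgsbar.Value += 1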

    Read the article

  • How to setup Lucene search for a B2B web app?

    - by Bill Paetzke
    Given:

    - 5000 databases (spread out over 5 servers)
    - 1 database per client (so you can infer there are 1000 clients)
    - 2 to 2000 users per client (let's say the average is 100 users per client)
    - Clients (databases) come and go every day (let's assume most remain for at least one year)
    - Let's stay agnostic of language or SQL brand, since Lucene (and Solr) have a breadth of support

    The question: how would you set up Lucene search so that each client can only search within its database? How would you set up the index(es)? Would you need to add a filter to all search queries? If a client cancelled, how would you delete their (part of the) index? (This may be trivial; not sure yet.)

    Possible solutions:

    - Make an index for each client (database). Pro: search is faster (than the one-index-for-all method), and indices are relative to the size of the client's data. Con: I'm not sure what this entails, nor do I know if this is beyond Lucene's scope.
    - Have a single, gigantic index with a database_name field, and always include database_name as a filter. Pro: not sure; maybe good for tech support or the billing dept to search all databases for info. Con: search is slower (than the index-per-client method), and security is flawed if the query filter is removed.

    For example: Joel Spolsky said in Podcast #11 that his hosted web app product, FogBugz On-Demand, uses Lucene. He has thousands of on-demand clients, and each client gets their own database. His situation is quite similar to mine, although he didn't elaborate on the setup (particularly indices); hence the need for this question.

    One last thing: I would also accept an answer that uses Solr (the extension of Lucene). Perhaps it's better suited for this problem. Not sure.
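    For the single-big-index option, a hedged sketch of enforcing the per-client restriction server-side (Lucene 2.x/3.x-era API; the database_name field is from the post, the variable names are illustrative): wrap whatever the user typed in a BooleanQuery whose required clause pins database_name, so the filter cannot be removed by the client:

        // Sketch: user query plus a mandatory per-client term.
        BooleanQuery restricted = new BooleanQuery();
        restricted.add(userQuery, BooleanClause.Occur.MUST);
        restricted.add(new TermQuery(new Term("database_name", clientDbName)),
                       BooleanClause.Occur.MUST);

        // Cancelling a client then maps to a single delete:
        indexWriter.deleteDocuments(new Term("database_name", clientDbName));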

    Read the article

  • PHP Hashtable array optimisation.

    - by hiprakhar
    I made a PHP app which was taking about ~0.0070 sec to execute. Then I added a hashtable array with about 2000 values, and suddenly the execution time went up to ~0.0700 secs, almost 10 times the previous value. I tried commenting out the part where I search inside the hashtable array (but the array was still left defined); the execution time still remains about ~0.0500 secs.

    The array is something like:

        $subjectinfo = array(
            'TPT753' => 'Industrial Training',
            'TPT801' => 'High Polymeric Engineering',
            'TPT802' => 'Corrosion Engineering',
            'TPT803' => 'Decorative ,Industrial And High Performance Coatings',
            'TPT851' => 'Project');

    Is there any way to optimize this part? I cannot use a database, as I am running this app on Google App Engine, which does not yet support a JDO database for PHP.

    Some more code from the app:

        function getsubjectinfo($name) {
            $subjectinfo = array(
                'TPT753' => 'Industrial Training',
                'TPT801' => 'High Polymeric Engineering',
                'TPT802' => 'Corrosion Engineering',
                'TPT803' => 'Decorative ,Industrial And High Performance Coatings',
                'TPT851' => 'Project');
            $name = str_replace("-", "", $name);
            $name = str_replace(" ", "", $name);
            if (isset($subjectinfo["$name"]))
                return "(".$subjectinfo["$name"].")";
            else
                return "";
        }

    Then I am using the following statement 2-3 times in the app:

        echo $key." ".$this->getsubjectinfo($key)
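    A hedged sketch of the most likely win: the array literal inside getsubjectinfo() is rebuilt on every call, so each invocation pays the full 2000-entry construction cost, which would also explain why the slowdown persists with the lookup commented out but the array still defined. Declaring the array static builds it once per request:

        function getsubjectinfo($name) {
            // Built on the first call only; reused on subsequent calls.
            static $subjectinfo = array(
                'TPT753' => 'Industrial Training',
                'TPT801' => 'High Polymeric Engineering',
                // ... the remaining ~2000 entries ...
            );
            $name = str_replace(array("-", " "), "", $name);
            return isset($subjectinfo[$name]) ? "(" . $subjectinfo[$name] . ")" : "";
        }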

    Read the article

  • Python halts while iteratively processing my 1GB csv file

    - by Dan
    I have two files:

    - metadata.csv: contains an ID, followed by vendor name, a filename, etc.
    - hashes.csv: contains an ID, followed by a hash

    The ID is essentially a foreign key of sorts, relating file metadata to its hash. I wrote this script to quickly extract all hashes associated with a particular vendor. It craps out before it finishes processing hashes.csv:

        stored_ids = []

        # this file is about 1 MB
        entries = csv.reader(open(options.entries, "rb"))
        for row in entries:
            # row[2] is the vendor
            if row[2] == options.vendor:
                # row[0] is the ID
                stored_ids.append(row[0])

        # this file is 1 GB
        hashes = open(options.hashes, "rb")
        # I iteratively read the file here,
        # just in case the csv module doesn't do this.
        for line in hashes:
            # not sure if stored_ids contains strings or ints here...
            # this probably isn't the problem though
            if line.split(",")[0] in stored_ids:
                # if it's one of the IDs we're looking for, print the file and hash to STDOUT
                print "%s,%s" % (line.split(",")[2], line.split(",")[4])
        hashes.close()

    This script gets about 2000 entries through hashes.csv before it halts. What am I doing wrong? I thought I was processing it line by line.

    PS: the csv files are the popular HashKeeper format and the files I am parsing are the NSRL hash sets. http://www.nsrl.nist.gov/Downloads.htm#converter

    UPDATE: working solution below. Thanks everyone who commented!

        entries = csv.reader(open(options.entries, "rb"))
        stored_ids = dict((row[0], 1) for row in entries if row[2] == options.vendor)

        hashes = csv.reader(open(options.hashes, "rb"))
        matches = dict((row[2], row[4]) for row in hashes if row[0] in stored_ids)

        for k, v in matches.iteritems():
            print "%s,%s" % (k, v)

    Read the article

  • Web Hosting URL Length Limit?

    - by Isaac Waller
    Hello, I am designing a web application which is a tie-in to my iPhone application. It sends massively large URLs to the web server (about 15,000 characters). I was using NearlyFreeSpeech.net, but they only support URLs up to 2000 characters. I was wondering if anybody knows of web hosting that will support really large URLs? Thanks, Isaac

    EDIT: My program needs to open a picture in Safari. I could do this 2 ways:

    - send it base64-encoded in the URL and just echo the query parameters, or
    - first POST it to the server in my application; the server would store the photo in a database and send back a unique ID, which I would append to a URL and open in Safari, which would retrieve the photo from the database by ID and then delete it from the database.

    You see, I am lazy, and I know Mobile Safari can support URIs up to 80,000 characters, so I think the first is an OK way to do it. If there is something really wrong with this, please tell me.

    EDIT: I ended up doing it the proper POST way. Thanks.

    Read the article
