Search Results

Search found 4384 results on 176 pages for '1000'.


  • String.Format an integer to use 1000's separator without leading 0 for small integers

    - by Kragen
    Silly question, I want to format an integer so that it appears with the 1000's separator (,), but also without decimal places and without a leading 0. My attempts so far have been: String.Format("{0} {1}", 5, 5000); // 5 5000 String.Format("{0:n} {1:n}", 5, 5000); // 5.00 5,000.00 String.Format("{0:0,0} {1:0,0}", 5, 5000); // 05 5,000 The output I'm after is: 5 5,000 Is there something obvious that I'm missing?
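
    A minimal sketch of one format that produces that output (not part of the original question; assumes the default en-US culture):

      // Illustrative only: "#,0" gives the group separator with no decimals and no leading zero.
      using System;

      class FormatDemo
      {
          static void Main()
          {
              Console.WriteLine(String.Format("{0:#,0} {1:#,0}", 5, 5000)); // 5 5,000
              Console.WriteLine(String.Format("{0:N0} {1:N0}", 5, 5000));   // 5 5,000 ("N0" behaves the same here)
          }
      }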

    Read the article

  • Outlook Crashes with Event ID 1000

    - by Deepak N
    We deployed a VSTO add-in for Outlook 2003. After installing the add-in, one user's Outlook crashes when a contact is opened, i.e., when the ItemProperties of the contact are read. Some of these properties are read using MAPI. There is no exception being logged even though we have try/catch and logging in all methods. The event log has the following message: The description for Event ID 1000 from source Microsoft Office 11 cannot be found Source : Microsoft Office 11 The following information was included with the event: outlook.exe 11.0.8312.0 4a403990 msvcr80.dll 8.0.50727.3053 4889d619 0 0001500a

    Read the article

  • Sorting 1000-2000 elements with many cache misses

    - by Soylent Graham
    I have an array of 1000-2000 elements which are pointers to objects. I want to keep my array sorted, and obviously I want to do this as quickly as possible. They are sorted by a member and not allocated contiguously, so assume a cache miss whenever I access the sort-by member. Currently I'm sorting on-demand rather than on-add, but because of the cache misses and [presumably] non-inlining of the member access, the inner loop of my quicksort is slow. I'm doing tests and trying things now (to see what the actual bottleneck is), but can anyone recommend a good alternative for speeding this up? Should I do an insertion sort instead of quicksorting on-demand, or should I try to change my model to make the elements contiguous and reduce cache misses? Or is there a sort algorithm I've not come across which is good for data that is going to cache miss?
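
    A common way to attack the cache misses described above is to copy the sort keys out into a contiguous (key, pointer) array and sort that instead; the sketch below assumes an int key member named sortKey, which is an illustrative name only:

      // Rough sketch: one up-front pass pays the cache misses, the sort itself stays contiguous.
      #include <algorithm>
      #include <cstddef>
      #include <utility>
      #include <vector>

      struct Obj { int sortKey; /* ... */ };   // illustrative stand-in for the real object

      void sortByKey(std::vector<Obj*>& items)
      {
          std::vector<std::pair<int, Obj*>> keyed;
          keyed.reserve(items.size());
          for (Obj* p : items)
              keyed.emplace_back(p->sortKey, p);           // one miss per element, up front

          std::sort(keyed.begin(), keyed.end(),
                    [](const std::pair<int, Obj*>& a, const std::pair<int, Obj*>& b) {
                        return a.first < b.first;           // comparisons touch only this array
                    });

          for (std::size_t i = 0; i < items.size(); ++i)   // write the sorted order back
              items[i] = keyed[i].second;
      }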

    Read the article

  • How to run stored procedure 1000 times

    - by subt13
    I have a stored procedure that I'm using to populate a table with about 60 columns. I have generated 1000 exec statements that look like this: exec PopulateCVCSTAdvancement 174, 213, 1, 0, 7365 exec PopulateCVCSTAdvancement 174, 214, 1, 0, 7365 exec PopulateCVCSTAdvancement 175, 213, 0, 0, 7365 Each time, the stored procedure will be inserting anywhere from 1 to 3,000 records (usually around 2,000 records). The "server" is running desktop hardware with 4 gigs of available memory on a server OS. The problem I have is that after the first 10-15 executions, which average 1-2 seconds each, the next 10-15 seem to never finish. Am I doing this correctly? How should I do this? Thanks! Top 10 waiters: LAZYWRITER_SLEEP SQLTRACE_INCREMENTAL_FLUSH_SLEEP REQUEST_FOR_DEADLOCK_SEARCH XE_TIMER_EVENT FT_IFTS_SCHEDULER_IDLE_WAIT CHECKPOINT_QUEUE LOGMGR_QUEUE SLEEP_TASK BROKER_TO_FLUSH BROKER_TASK_STOP

    Read the article

  • The LoadLibraryA method returns error code 1114 (ERROR_DLL_INIT_FAILED) after more than 1000 cycles

    - by Javier
    Hi, I'm programming in C++ using Visual Studio 2008 on Windows XP, and I have the following problem: my application, which is a DLL that can be used from Python, loads an external DLL, uses the required methods, and then unloads this external DLL. It works properly, but after more than 1000 cycles the LoadLibraryA method returns a NULL reference. The main steps are: HINSTANCE h = NULL; h = LoadLibraryA(dllfile.c_str()); DWORD dw = GetLastError(); The error returned is: ERROR_DLL_INIT_FAILED 1114 (0x45A) A dynamic link library (DLL) initialization routine failed. The DLL is unloaded using the following: FreeLibrary(mDLL); mDLL = NULL; where mDLL is defined like this: HINSTANCE mDLL; First alternative tried: load the DLL only once and unload it when the application ends. This fixes the problem but introduces a new one. When the application ends, instead of the DllMain method of my application (which unloads the external DLL) executing first, the DllMain method of the other DLL executes first. This causes the following error, because my application is trying to unload a DLL that was already unloaded by itself: "Unhandled exception at 0x04a00d07 (DllName.DLL) in Python.exe: 0xC0000005: Access violation reading location 0x0000006b". Any suggestion will be welcome. Thanks in advance. Regards.

    Read the article

  • jCarousel jQuery ajax loading 1000 records

    - by user1714862
    I'm using jCarousel to present a vertical scrolling list of +-1000 names. I am using ajax to load the data 100 records at a time; when all the data has loaded, I just let the jCarousel loop in the DOM. I have the ajax and loop all working but would like to make the code work no matter how large the total record count becomes. 1) I'd like to eliminate the fixed number 1201 and use a variable. 2) I currently loop on every record I see (carousel.first) to see if it matches my reload position(s) (albeit the loop is only 12x, it still seems a little "loopy"). Any suggestions on improving this? function mycarousel_itemLoadCallback(carousel, state) { //if (carousel.has(carousel.first, carousel.last)) { //return; //} var getCount = 100; // Number of records to grab at a time var maxCount = 1201; // total possible number of records var visible = 9; // the number of records you can see in the window so this creates a pre-load by this number of records for (var i = 1; i < maxCount; i+=getCount ) { if (carousel.first === 1 || carousel.first === (i-visible)){ var getFrom = i; var getTo = getFrom+(getCount-1); //alert('TOP Record ='+carousel.first+'\n Now GET '+getFrom+'-'+getTo); jQuery.get('#ajaxscript#', { first: getFrom, last: getTo }, function(xml) { mycarousel_itemAddCallback(carousel, getFrom, getTo, xml); }, 'xml' ); break; } } };
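
    One way to address both points, sketched here only as an illustration (the total would come from the server rather than a constant; carousel.options.size is an assumption about where that total is kept):

      function mycarousel_itemLoadCallback(carousel, state) {
          var getCount = 100;                               // records per request
          var visible  = 9;                                 // pre-load margin
          var total    = carousel.options.size || 1201;     // assumed source of the real total

          // Fire only at the start, or exactly `visible` rows before a batch boundary.
          var getFrom = (carousel.first === 1) ? 1 : carousel.first + visible;
          if (carousel.first !== 1 && (getFrom - 1) % getCount !== 0) return;
          if (getFrom > total) return;                      // nothing left to load

          var getTo = Math.min(getFrom + getCount - 1, total);
          jQuery.get('#ajaxscript#', { first: getFrom, last: getTo }, function (xml) {
              mycarousel_itemAddCallback(carousel, getFrom, getTo, xml);
          }, 'xml');
      }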

    Read the article

  • A SelfHosted WCF Service over Basic HTTP Binding doesn't support more than 1000 concurrent requests

    - by Krishnan
    I have self hosted a WCF Service over BasicHttpBinding consumed by an ASMX Client. I'm simulating a concurrent user load of 1200 users. The service method takes a string parameter and returns a string. The data exchanged is less than 10KB. The processing time for a request is fixed at 2 seconds by having a Thread.Sleep(2000) statement. Nothing additional. I have removed all the DB Hits / business logic. The same piece of code runs fine for 1000 concurrent users. I get the following error when I bump up the number to 1200 users. System.Net.WebException: The underlying connection was closed: An unexpected error occurred on a receive. ---> System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. ---> System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags) at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) --- End of inner exception stack trace --- at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) at System.Net.PooledStream.Read(Byte[] buffer, Int32 offset, Int32 size) at System.Net.Connection.SyncRead(HttpWebRequest request, Boolean userRetrievedStream, Boolean probeRead) --- End of inner exception stack trace --- at System.Web.Services.Protocols.WebClientProtocol.GetWebResponse(WebRequest request) at System.Web.Services.Protocols.HttpWebClientProtocol.GetWebResponse(WebRequest request) at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters) at WCF.Throttling.Client.Service.Function2(String param) This exception is often reported on DataContract mismatch and large data exchange. But never when doing a load test. I have browsed enough and have tried most of the options which include, Enabled Trace & Message log on server side. But no errors logged. To overcome Port Exhaustion MaxUserPort is set to 65535, and TcpTimedWaitDelay 30 secs. MaxConcurrent Calls is set to 600, and MaxConcurrentInstances is set to 1200. The Open, Close, Send and Receive Timeouts are set to 10 Minutes. The HTTPWebRequest KeepAlive set to false. I have not been able to nail down the issue for the past two days. Any help would be appreciated. Thank you.
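
    For reference, the server-side throttling values mentioned above are normally expressed as a serviceThrottling behavior in the host configuration, roughly as sketched below (the numbers are illustrative, not a recommendation). On the ASMX client side, the outgoing connection cap (System.Net.ServicePointManager.DefaultConnectionLimit, or the connectionManagement section of the client config) usually also has to be raised well above its default before 1200 truly concurrent calls can leave the client.

      <!-- Sketch of the throttling behavior referred to above (values illustrative) -->
      <system.serviceModel>
        <behaviors>
          <serviceBehaviors>
            <behavior name="throttled">
              <serviceThrottling maxConcurrentCalls="1200"
                                 maxConcurrentSessions="1200"
                                 maxConcurrentInstances="1200" />
            </behavior>
          </serviceBehaviors>
        </behaviors>
      </system.serviceModel>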

    Read the article

  • How to fetch more than 1000 entities, non-key-based?

    - by user291071
    If I should be approaching this problem through a different method, please suggest so. I am creating an item-based collaborative filter. I populate the db with the LinkRating2 class, and for each link there are more than 1000 users whose ratings I need to fetch and collect to perform calculations, which I then use to create another table. So I need to fetch more than 1000 entities for a given link. For instance, let's say over 1000 users rated 'link1'; there will then be over 1000 instances of this class with that link property that I need to fetch. How would I complete this example? class LinkRating2(db.Model): user = db.StringProperty() link = db.StringProperty() rating2 = db.FloatProperty() query =LinkRating2.all() link1 = 'link string name' a = query.filter('link = ', link1) aa = a.fetch(1000)##how would i get more than 1000 for a given link1 as shown? ##keybased over 1000 in other post example i need method for a subset though not key class MyModel(db.Expando): @classmethod def count_all(cls): """ Count *all* of the rows (without maxing out at 1000) """ count = 0 query = cls.all().order('__key__') while count % 1000 == 0: current_count = query.count() if current_count == 0: break count += current_count if current_count == 1000: last_key = query.fetch(1, 999)[0].key() query = query.filter('__key__ > ', last_key) return count
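
    For what it's worth, the key-based pattern from the second snippet can be combined with the equality filter on link, along these lines (a sketch against the old db API, untested against the original models, and possibly requiring an extra index depending on the datastore configuration):

      # Sketch: page through every LinkRating2 for one link, 1000 at a time,
      # using __key__ as the cursor alongside the equality filter on 'link'.
      def fetch_all_ratings(link_name):
          ratings = []
          query = LinkRating2.all().filter('link =', link_name).order('__key__')
          batch = query.fetch(1000)
          while batch:
              ratings.extend(batch)
              last_key = batch[-1].key()
              query = (LinkRating2.all()
                       .filter('link =', link_name)
                       .filter('__key__ >', last_key)
                       .order('__key__'))
              batch = query.fetch(1000)
          return ratings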

    Read the article

  • How to get file names using OpenFileDialog in .NET (1000+ file multiselect)

    - by Cole
    Maybe some of you have come across this before.... I am opening files for parsing. I'm using OpenFileDialog, of course, but I'm limited to a buffer of 2048 on the .FileNames string. Thus, I can only select a few hundred files. This is OK for most cases. However, for example, in one case I have 1400 files to open. Do you know a way to do this with the open file dialog? I just want the string array of .FileNames, which I pass to the parser class. I was also thinking of offering a FolderBrowserDialog option and then using some other method to just loop through all the files in a directory, like the DirectoryInfo class. I'd do this as a last resort if I can't have an all-in-one solution.
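
    The folder-based fallback mentioned at the end is straightforward; a sketch (the search pattern and the parser call are placeholders for whatever the real code uses):

      // Sketch of the FolderBrowserDialog fallback: pick a folder, hand every
      // matching file name to the parser. Requires System.IO and System.Windows.Forms.
      using (var dialog = new FolderBrowserDialog())
      {
          if (dialog.ShowDialog() == DialogResult.OK)
          {
              string[] fileNames = Directory.GetFiles(dialog.SelectedPath, "*.*",
                                                      SearchOption.TopDirectoryOnly);
              parser.Parse(fileNames);   // placeholder for the existing parser call
          }
      }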

    Read the article

  • How to put 1000 lightweight server applications in the cloud

    - by Dan Bird
    The company I work for sells a commercial desktop/server app that runs on any non dedicated Windows PC or server and uses Tomcat for all interactions with the application. Customers are asking that we host their instance of the application so they don't have to run it locally on their own servers. The app is lightweight and an average server, in theory, could handle 25-50 instances before users would notice a slowdown. However only 1 instance can run per Windows instance (because the application writes to a common registry branch) so we'd need something like VMWare to create 25-50 Windows instances. We know we eventually need to reprogram to make it truly cloud-worthy but what would you recommend for a server farm or whatever for this? We don't have the setup to purchase our own servers so we must use a 3rd party. We have budgeted $500 - $1000 per year per customer for this service. Thanks in advance for your suggestions, experiences and guidance.

    Read the article

  • SQL: Using a CASE Statement to update 1000 rows at once

    - by SoLoGHoST
    Ok, I would like to use a CASE STATEMENT for this, but I am lost with this. Basically, I need to update a ton of rows, but just on the "position" column. I need to update all "position" values from 0 - count(position) for each id_layout_position column per id_layout column. OK, here is a pic of what the table looks like: Now let's say I delete the circled row, this will remove position = 2 and give me: 0, 1, 3, 5, 6, 7, and 4. But I want to add something at the end now and make sure that it has the last possible position, but the positions are already messed up, so I need to reorder them like so before I insert the new row: 0, 1, 2, 3, 4, 5, 6. But it must be ordered by lowest first. So 0 stays at 0, 1 stays at 1, 3 gets changed to 2, the 4 at the end gets changed to a 3, 5 gets changed to 4, 6 gets changed to 5, and 7 gets changed to 6. Hopefully you guys get the picture now. I'm completely lost here. Also, note, this table is tiny compared to how fast it can grow in size, so it needs to be able to do this FAST, thus I was thinking on the CASE STATEMENT for an UPDATE QUERY. Here's what I got for a regular update, but I don't wanna throw this into a foreach loop, as it would take forever to do it. I'm using SMF (Simple Machines Forums), so it might look a little different, but the idea is the same, and CASE statements are supported... $smcFunc['db_query']('', ' UPDATE {db_prefix}dp_positions SET position = {int:position} WHERE id_layout_position = {int:id_layout_position} AND id_layout = {int:id_layout}', array( 'position' => $position++, 'id_layout_position' => (int) $id_layout_position, 'id_layout' => (int) $id_layout, ) ); Anyways, I need to apply some sort of CASE on this so that I can auto-increment by 1 all values that it finds and update to the next possible value. I know I'm doing this wrong, even in this QUERY. But I'm totally lost when it comes to CASES. Here's an example of a CASE being used within SMF, so you can see this and hopefully relate: $conditions = ''; foreach ($postgroups as $id => $min_posts) { $conditions .= ' WHEN posts >= ' . $min_posts . (!empty($lastMin) ? ' AND posts <= ' . $lastMin : '') . ' THEN ' . $id; $lastMin = $min_posts; } // A big fat CASE WHEN... END is faster than a zillion UPDATE's ;). $smcFunc['db_query']('', ' UPDATE {db_prefix}members SET id_post_group = CASE ' . $conditions . ' ELSE 0 END' . ($parameter1 != null ? ' WHERE ' . (is_array($parameter1) ? 'id_member IN ({array_int:members})' : 'id_member = {int:members}') : ''), array( 'members' => $parameter1, ) ); Before I do the update, I actually have a SELECT which throws everything I need into arrays like so: $disabled_sections = array(); $positions = array(); while ($row = $smcFunc['db_fetch_assoc']($request)) { if (!isset($disabled_sections[$row['id_group']][$row['id_layout']])) $disabled_sections[$row['id_group']][$row['id_layout']] = array( 'info' => $module_info[$name], 'id_layout_position' => $row['id_layout_position'] ); // Increment the positions... if (!is_null($row['position'])) { if (!isset($positions[$row['id_layout']][$row['id_layout_position']])) $positions[$row['id_layout']][$row['id_layout_position']] = 1; else $positions[$row['id_layout']][$row['id_layout_position']]++; } else $positions[$row['id_layout']][$row['id_layout_position']] = 0; } Thanks, I know if anyone can help me here it's definitely you guys and gals... 
Anyways, here is my question: How do I use a CASE statement in the first code example, so that I can update all of the rows in the position column from 0 - total # of rows found, that have that id_layout value and that id_layout_position value, and continue this for all different id_layout values in that table? Can I use the arrays above somehow? I'm sure I'll have to use the id_layout and id_layout_position values for this right? But how can I do this? Ok, guy, I get an error, saying "Hacking Attempt" with the following code: // Updating all positions in here. $smcFunc['db_query']('', ' SET @pos = 0; UPDATE {db_prefix}dp_positions SET position=@pos:=@pos+1 ORDER BY id_layout_position, position', array( ) ); Am I doing something wrong? Perhaps SMF has safeguards against this approach?? Perhaps I need to use a CASE STATEMENT instead?
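
    On the last error above: SMF's query check typically rejects anything that looks like two statements in one call, so one hedged possibility (untested, and it keeps the per-group looping rather than replacing it with a CASE) is to issue the SET and the ordered UPDATE as two separate db_query calls, renumbering one id_layout / id_layout_position group per UPDATE:

      // Sketch only: split the variable initialisation and the ordered UPDATE
      // into separate calls so no single query contains two statements.
      $smcFunc['db_query']('', 'SET @pos := -1', array());
      $smcFunc['db_query']('', '
          UPDATE {db_prefix}dp_positions
          SET position = (@pos := @pos + 1)
          WHERE id_layout = {int:id_layout}
              AND id_layout_position = {int:id_layout_position}
          ORDER BY position',
          array(
              'id_layout' => (int) $id_layout,
              'id_layout_position' => (int) $id_layout_position,
          )
      );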

    Read the article

  • Third quarter beyond forecasts at Google: its stock tops $1,000 for the first time in its history

    Third quarter beyond forecasts at Google: its stock tops $1,000 for the first time in its history. Google has reported results above its forecasts for its third quarter. For the first time in its history, Google's stock climbed above $1,000 on Friday, October 18. Recall that in 2004 the stock was worth $100. In addition, its net profit rose by 36%, and the firm posted $2.97 billion versus $2.18 billion...

    Read the article

  • Intel Centrino Wireless-N 1000 Again ! Ubuntu 13.04 x64

    - by vafa
    First I have to say that I tried everything written about this concept. The problem is that it stops working randomly in 3 main forms : 1 - sometimes it disconnect from wireless network and reconnect automatically 2 - sometimes it disconnect and wont connect no matter what (needs reboot) 3 - some times it's still connected but cannot ping or surf or whatever. I already tried disabling N mod using these commands : sudo modprobe -r iwlwifi modprobe iwlwifi 11n_disable=1 (or 0, whatever) it didn't help . these are the results of lspci, sudo lshw -C network, ifconfig, iwconfig, rfkill list when it disconnected and didn't connect till reboot : ifconfig : eth0 Link encap:Ethernet HWaddr c8:0a:a9:34:65:77 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:1563213476557380 errors:9379306629148050 dropped:3126435543049350 overruns:1563217771524675 frame:7816088857623375 TX packets:1563217771524675 errors:6252871086098700 dropped:0 overruns:1563217771524675 carrier:3126435543049350 collisions:7816088857623375 txqueuelen:1000 RX bytes:1563217771524675 (1.5 PB) TX bytes:1563217771524675 (1.5 PB) ham0 Link encap:Ethernet HWaddr 7a:79:19:a5:e4:93 inet addr:25.165.228.147 Bcast:25.255.255.255 Mask:255.0.0.0 inet6 addr: fe80::7879:19ff:fea5:e493/64 Scope:Link inet6 addr: 2620:9b::19a5:e493/96 Scope:Global UP BROADCAST RUNNING MULTICAST MTU:1404 Metric:1 RX packets:7743 errors:0 dropped:0 overruns:0 frame:0 TX packets:1250 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:500 RX bytes:665642 (665.6 KB) TX bytes:204056 (204.0 KB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:41138 errors:0 dropped:0 overruns:0 frame:0 TX packets:41138 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:6420962 (6.4 MB) TX bytes:6420962 (6.4 MB) wlan0 Link encap:Ethernet HWaddr 00:1e:64:45:fb:70 inet6 addr: fe80::21e:64ff:fe45:fb70/64 Scope:Link UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:286999 errors:0 dropped:0 overruns:0 frame:0 TX packets:226966 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:324386887 (324.3 MB) TX bytes:30674804 (30.6 MB) iwconfig : ham0 no wireless extensions. eth0 no wireless extensions. lo no wireless extensions. 
wlan0 IEEE 802.11bg ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=14 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off sudo lshw -C network: *-network description: Wireless interface product: Centrino Wireless-N 1000 [Condor Peak] vendor: Intel Corporation physical id: 0 bus info: pci@0000:07:00.0 logical name: wlan0 version: 00 serial: 00:1e:64:45:fb:70 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=iwlwifi driverversion=3.8.0-30-generic firmware=39.31.5.1 build 35138 latency=0 link=no multicast=yes wireless=IEEE 802.11bg resources: irq:46 memory:c0400000-c0401fff *-network description: Ethernet interface product: AR8131 Gigabit Ethernet vendor: Qualcomm Atheros physical id: 0 bus info: pci@0000:09:00.0 logical name: eth0 version: c0 serial: c8:0a:a9:34:65:77 capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress vpd cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=atl1c driverversion=1.0.1.1-NAPI latency=0 link=no multicast=yes port=twisted pair resources: irq:47 memory:c0900000-c093ffff ioport:5000(size=128) *-network description: Ethernet interface physical id: 2 logical name: ham0 serial: 7a:79:19:a5:e4:93 size: 10Mbit/s capabilities: ethernet physical configuration: autonegotiation=off broadcast=yes driver=tun driverversion=1.6 duplex=full ip=25.165.228.147 link=yes multicast=yes port=twisted pair speed=10Mbit/s lspci: 00:00.0 Host bridge: Intel Corporation Mobile 4 Series Chipset Memory Controller Hub (rev 07) 00:01.0 PCI bridge: Intel Corporation Mobile 4 Series Chipset PCI Express Graphics Port (rev 07) 00:1a.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 03) 00:1a.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 03) 00:1a.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 03) 00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 03) 00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 03) 00:1c.3 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 4 (rev 03) 00:1c.5 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 6 (rev 03) 00:1d.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 03) 00:1d.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 03) 00:1d.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 03) 00:1d.3 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 03) 00:1d.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 03) 00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev 93) 00:1f.0 ISA bridge: Intel Corporation ICH9M LPC Interface Controller (rev 03) 00:1f.2 SATA controller: Intel Corporation 82801IBM/IEM (ICH9M/ICH9M-E) 4 port SATA Controller [AHCI mode] (rev 03) 00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 03) 01:00.0 VGA compatible controller: NVIDIA Corporation G98M [GeForce G 105M] (rev a1) 07:00.0 Network controller: Intel Corporation Centrino Wireless-N 1000 [Condor Peak] 09:00.0 Ethernet controller: Qualcomm Atheros AR8131 Gigabit Ethernet (rev c0) rfkill list : 1: acer-wireless: Wireless LAN Soft blocked: 
no Hard blocked: no 2: acer-bluetooth: Bluetooth Soft blocked: yes Hard blocked: no 9: phy0: Wireless LAN Soft blocked: no Hard blocked: no any help will be REALLLYYYY appreciated
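
    For the record, the 11n_disable option is usually made persistent through a modprobe.d file rather than the one-off modprobe commands mentioned above; a sketch (the file name is arbitrary, any *.conf name under /etc/modprobe.d/ works):

      # /etc/modprobe.d/iwlwifi-11n.conf
      options iwlwifi 11n_disable=1
      # then reload the driver (or reboot):
      #   sudo modprobe -r iwlwifi && sudo modprobe iwlwifi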

    Read the article

  • Why is the file /var/crash/_usr_lib_empathy_empathy-chat.1000.upload empty?

    - by user43816
    I just experienced an unusual crash: I tried to click on a name entry in the contact list of Empathy, and a crash happened. Usually I am then asked if I'd like to report the error to Launchpad, and Launchpad opens. This time the error message was: "Excuse me. Ubuntu 12.04 noticed an internal error. If you notice further problems try to restart your computer. Send an error report to Launchpad to help removing this problem?" This time Launchpad did not open automatically. A new window opened and I could read the relevant error report. Later I found the error report in the file /var/crash/_usr_lib_empathy_empathy-chat.1000.crash. Why did the course of action differ this time from other crashes? Why is the file /var/crash/_usr_lib_empathy_empathy-chat.1000.upload empty?

    Read the article

  • svnstat script

    - by Kyle Hodgson
    So I'm building out a shell script to check out all of our relevant svn repositories for analysis in svnstat. I've gotten all of this to work manually, now I'm writing up a bash script in cygwin on my Vista laptop, as I intend to move this to a Linux server at some point. Edit: I gave up on this and wrote a simple .bat script. I'll figure out the Linux deployment some other way. Edit: added the sleep 30 and svn log commands. I can tell now, with the svn log command, that it's not getting to the svn log ... this time, it did Applications, and ran the log, and then check out Database, and froze. I'll put the sleep 30 before and after the log this time. co2.sh #!/bin/bash function checkout { mkdir $1 svn checkout svn://dev-server/$1 $1 svn log --verbose --xml >> svn.log $1 sleep 30 } cd /cygdrive/c/Users/My\ User/Documents/Repos/wc checkout Applications checkout Database checkout WebServer/www.mysite.com checkout WebServer/anotherhost.mysite.com checkout WebServer/AnotherApp checkout WebServer/thirdhost.mysite.com checkout WebServer/fourthhost.mysite.com checkout WebServer/WebServices It works, for the most part - but for some reason it has a tendency to stop working after a few repositories, usually right after finishing a repository before going to the next one. When it fails, it will not recover on its own. I've tried commenting out the svn line, it goes in and creates all the directories just fine when I do that - so its not that. I'm looking for direction as well as direct advice. Cygwin has been very stable for me, but I did start using the native rxvt instead of "bash in a cmd.exe window" recently. I don't think that's the problem, as I've left top on remote systems running all night and rxvt didn't seem to mind. Also I haven't done any bash scripting in cygwin so I suppose this might not be recommended; though I can't see why not. I don't want all of WebServer, hence me only checking out certain folders like that. What I suspect is that something is hanging up the svn checkout. Any ideas here? Edit: this time when I hit ctrl+z to cancel out, I forgot I was on Windows and typed ps to see if the job was still running; and as you can see there are lots of svn processes hanging around... strange. 
Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $ Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $ jobs [1]- Stopped bash co2.sh [2]+ Stopped ./co2.sh Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $ kill %1 [1]- Stopped bash co2.sh Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $ [1]- Terminated bash co2.sh Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $ Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $ Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $ ps PID PPID PGID WINPID TTY UID STIME COMMAND 7872 1 7872 2340 0 1000 Jun 29 /usr/bin/svn 7752 1 6140 7828 1 1000 Jun 29 /usr/bin/svn 6192 1 5044 2192 1 1000 Jun 30 /usr/bin/svn 7292 1 7452 1796 1 1000 Jun 30 /usr/bin/svn 6236 1 7304 7468 2 1000 Jul 2 /usr/bin/svn 1564 1 5032 7144 2 1000 Jul 2 /usr/bin/svn 9072 1 3960 6276 3 1000 Jul 3 /usr/bin/svn 5876 1 5876 5876 con 1000 11:22:10 /usr/bin/rxvt 924 5876 924 10192 4 1000 11:22:10 /usr/bin/bash 7212 1 7332 5584 4 1000 13:17:54 /usr/bin/svn 9412 1 5480 8840 4 1000 15:38:16 /usr/bin/svn S 8128 924 8128 9452 4 1000 17:38:05 /usr/bin/bash 9132 8128 8128 8172 4 1000 17:43:25 /usr/bin/svn 3512 1 3512 3512 con 1000 17:43:50 /usr/bin/rxvt I 10200 3512 10200 6616 5 1000 17:43:51 /usr/bin/bash 9732 1 9732 9732 con 1000 17:45:55 /usr/bin/rxvt 3148 9732 3148 8976 6 1000 17:45:55 /usr/bin/bash 5856 3148 5856 876 6 1000 17:51:00 /usr/bin/vim 7736 924 7736 8036 4 1000 17:53:26 /usr/bin/ps Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $ jobs [2]+ Stopped ./co2.sh Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $ Here's an strace on the PID of the hung svn program, it's been like this for hours. Looks like its just doing nothing. I keep suspecting that some interruption on the server is causing this; does svn have a locking mechanism I'm not aware of? Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $ strace -p 7304 ********************************************** Program name: C:\cygwin\bin\svn.exe (pid 7304, ppid 6408) App version: 1005.25, api: 0.156 DLL version: 1005.25, api: 0.156 DLL build: 2008-06-12 19:34 OS version: Windows NT-6.0 Heap size: 402653184 Date/Time: 2009-07-06 18:20:11 **********************************************
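
    One frequent cause of an svn checkout "hanging" in an unattended script is the client silently waiting on a password or server-certificate prompt; forcing non-interactive mode rules that out. A sketch of the checkout function with that flag added (paths quoted and mkdir -p used, but otherwise the same shape; untested against the repositories above):

      #!/bin/bash
      # Sketch: same loop, but svn can never stop to ask a question.
      function checkout {
          mkdir -p "$1"
          svn checkout --non-interactive "svn://dev-server/$1" "$1"
          svn log --verbose --xml --non-interactive "$1" >> svn.log
          sleep 30
      }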

    Read the article

  • In the Big Data era, past the 1,000th Oracle Exadata Database Machine sold

    - by user645740
    As the wind has been carrying the news for a while now, the BIG DATA era has arrived: ever more data accumulates, and we manage ever more data. A good portion of this huge amount of data is stored in Oracle databases. And what could run these Oracle databases better, faster and more efficiently than Oracle's strategic high-end solution, the Oracle Exadata Database Machine? The data has countless sources; a few examples where the growth is enormous: communications data, CDRs, banking and government transactions, location information (spatial, location, GPS, ...), as we could recently read in the "affairs" concerning certain phones and operating systems, e-mails, social sites, smart meters, household appliances, .... How fast are Exadata sales growing? Well, Exadata was announced in the autumn of 2008. In the report at the end of Oracle's fiscal year we can read that Exadata is an unmatched success: Oracle customers have already bought more than 1,000 Exadatas, said Mark Hurd, Oracle President:   “In addition to record setting software sales, our Exadata and Exalogic systems also made a strong contribution to our growth in Q4,” said Oracle President, Mark Hurd. “Today there are more than 1,000 Exadata machines installed worldwide. Our goal is to triple that number in FY12.” Larry Ellison, Oracle's top executive, stated that Oracle is growing ever faster both in cloud computing and in in-memory databases:   “In FY11 Oracle’s database business experienced its fastest growth in a decade,” said Oracle CEO, Larry Ellison. “Over the past few years we added features to the Oracle database for both cloud computing and in-memory databases that led to increased database sales this past year. Lately we’ve been focused on the big business opportunity presented by Big Data.” In the Big Data era we can achieve savings with Exadata; take a look at the following video, but carefully, because it will make you think:    -   Oracle Exadata: Are You Ready?.

    Read the article

  • Move 53,800+ files into 54 separate folders with ~1000 files each?

    - by ane
    Trying to import 53,800+ individual files (messages) using Gmail's POP fetcher. Gmail understandably refuses, giving the error: "Too many messages to download. There are too many messages on the other server." The folder in question looks like similar to: /usr/home/customer/Maildir/cur/1203672790.V57I586f04M867101.mail.net:2,S /usr/home/customer/Maildir/cur/1203676329.V57I586f22M520117.mail.net:2,S /usr/home/customer/Maildir/cur/1203677194.V57I586f26M688004.mail.net:2,S /usr/home/customer/Maildir/cur/1203679158.V57I586f2bM182864.mail.net:2,S /usr/home/customer/Maildir/cur/1203680493.V57I586f33M740378.mail.net:2,S /usr/home/customer/Maildir/cur/1203685837.V57I586f0bM835200.mail.net:2,S /usr/home/customer/Maildir/cur/1203687920.V57I586f65M995884.mail.net:2,S ... Using the shell (tcsh, sh, etc. on FreeBSD), what one-line command can I type to split this directory full of files into separate folders so Gmail only sees 1000 messages at a time? Something with find or ls | xargs mv maybe. Whatever is fastest. The desired output directory would now look something like: /usr/home/customer/Maildir/cur/1203672790.V57I586f04M867101.mail.net:2,S /usr/home/customer/Maildir/cur/1203676329.V57I586f22M520117.mail.net:2,S ... /usr/home/customer/set1/ (contains messages 1-1000) /usr/home/customer/set2/ (contains messages 1001-2000) /usr/home/customer/set3/ (etc.) Ideally, cron could run another command to automatically reverse the process in 1000 message increments every hour. So Gmail only sees & downloads 1000 at a time.
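
    A sketch of one way to do the split in plain sh (paths as in the question, folder names taken from the desired layout above; a loop rather than a strict one-liner, but it only touches each file once):

      #!/bin/sh
      # Sketch: move the Maildir messages into set1, set2, ... with 1000 files each.
      i=0
      for f in /usr/home/customer/Maildir/cur/*; do
          d="/usr/home/customer/set$((i / 1000 + 1))"
          mkdir -p "$d"
          mv "$f" "$d"/
          i=$((i + 1))
      done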

    Read the article

  • Ubuntu 12.04.1 failing in a VM (both VirtualBox and VMware) on a new HP Envy 4t-1000

    - by Chas
    Brand new to Linux, getting frustrated trying to get an environment up with Ubuntu. My primary goal is to learn Linux and Apache/PHP development. I need to keep my Windows OS as main on my machine for work, so i'm trying to virtualize Ubuntu 12.4.1 without luck (many attempts). I have a new HP Envy 4t-1000 with 16gb ram, and 32 gb ssd caching with 500gb spindle hard drive. Graphics card is an Intel HD 3000 with AMD Radeon 7670M. With installing Ubuntu desktop in VBox, I'm getting this result: https://forums.virtualbox.org/viewtopic.php?f=6&t=51939 With VMware workstation 7 (patched), I complete the install of Ubuntu, it reboots, purple desktop briefly flashes then it drops to command line. I bought a beginning Ubuntu book, and it recommends trying to manually configure graphics if this happens. So I tried doing a safe boot holding shift - I get to the first screen (GRUB) loads fine, and I choose recovery mode. After choosing the recovery mode, I get the recovery mode options, and can arrow down to what the book suggests 'Run in fail safe graphic mode.' Once I select this option, I get a black screen with a large white dialogue box, at the top it says "The system is running in low-graphics mode. Your screen, graphics card, and input device settings could not be detected correctly. You will need to configure these yourself." Then there is an ok button way down at the bottom. When I select 'ok' I get a menu for a few options, book recommended 'reconfigure graphics.' When I try this, I get a menu of two options: 1) "Use generic (default) configuration or 2) use backup. I've tried both options several times, hitting ok just refreshes screen and nothing more. Rebooting at this point just goes back to command line as before. I don't know what to do at this point, I've spent too many hours this weekend trying in both VBox and VMware to get Ubuntu going. Isn't there like a very basic graphic display or something I can use to at least get into the desktop? I explored the GRUB some more, and tried to look at the startup and xserver logs - both are blank. No help there I guess? When I try to choose 'Edit the configuration file, then 'ok' screen just refreshes on same menu options, nothing happens. thx for any advice. I really need to focus on learning Linux, Apache and PHP, so perhaps Ubuntu just won't work on my hardware? Any other suggestions? I will need to virtualize - THANKS for any help/advice.

    Read the article

  • NTFS partitions mount as root instead of user as set in /etc/fstab

    - by G1bs0n
    I recently upgraded a server to Ubuntu 12.04 with a fresh install and my NTFS partitions won't mount as user at boot but I can mount them as user manually from the console with $ sudo mount -a. Using ntfsfix reports no problems and chkdisk sees nothing wrong under Windows 7. Are the drives not ready to be mounted at boot and default to root instead of user for some reason? Here is my /etc/fstab: UUID=E4E6B30CE6B2DDCC /media/Bowles ntfs-3g defaults,uid=1000,gid=1000,umask=022 0 0 UUID=A040C42340C3FDD2 /media/Burroughs ntfs-3g defaults,uid=1000,gid=1000,umask=022 0 0 UUID=EA022C73022C46C3 /media/DoctorGonzo ntfs-3g defaults,uid=1000,gid=1000,umask=022 0 0 UUID=BA425A384259FA19 /media/Geist ntfs-3g defaults,uid=1000,gid=1000,umask=022 0 0 UUID=E87CFAE57CFAAE06 /media/DouglasAdams ntfs-3g defaults,uid=1000,gid=1000,umask=022 0 0 Here is the output of ls -l after boot: drwxr-xr-x 1 xbmc xbmc 4096 Oct 31 21:46 Bowles drwxrwxr-x 1 root users 8192 Oct 31 21:46 Burroughs drwxrwxr-x 1 root users 4096 Oct 28 21:45 DoctorGonzo drwxrwxr-x 1 root users 12288 Oct 31 19:56 DouglasAdams drwxrwxr-x 1 root users 4096 Nov 3 01:03 Geist If I unmount and mount again with $ sudo mount -a from console, the output of ls -l: drwxr-xr-x 1 xbmc xbmc 4096 Oct 31 21:46 Bowles drwxr-xr-x 1 xbmc xbmc 8192 Oct 31 21:46 Burroughs drwxr-xr-x 1 xbmc xbmc 4096 Oct 28 21:45 DoctorGonzo drwxr-xr-x 1 xbmc xbmc 12288 Oct 31 19:56 DouglasAdams drwxr-xr-x 1 xbmc xbmc 4096 Nov 3 01:03 Geist Update I was fooling myself, I had a custom udev rule set up to auto-mount file systems by label for USB drives, borrowed from here, but didn't update the rule to accommodate for my additional hard drives. Updating the rule to auto-mount only drives after /dev/sde solved my problem. Thank you again for your reply cartoonist.

    Read the article

  • Remove CR LF for all lines that are not followed by a specific number

    - by Kjeldsen
    I have 14,000+ lines of a database that I want to edit with Notepad++. All these lines should start with 1000, and I therefore want to delete the CR LF at the end of those lines that are not followed by 1000. E.g. 1000 16 04000 CRLF sdfsdf 15 sdf de 05550 CRLF 1000 16 04000 CRLF 1000 16 04000 CRLF 5. sdkfd dksds 16 0555 CRLF 10/10/14 sdfsdf CRLF should, after find and replace, look like 1000 16 04000 sdfsdf 15 sdf de 05550 CRLF 1000 16 04000 CRLF 1000 16 04000 5. sdkfd dksds 16 0555 10 sdfsdf CRLF I have tried with find what: \r\n([^1000]) and replace with: _\1 However, this doesn't seem to remove the line breaks before lines starting with a number (like 5. or 10/10/14). Is it possible to make just one RegEx to find and remove all line breaks that aren't followed by 1000? ("_" indicating a space.)
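
    For reference, a character class like [^1000] only excludes the individual characters 1 and 0; "not followed by 1000" is expressed with a negative lookahead, which Notepad++ supports in its regular-expression search mode (version 6.0 or later). A sketch of the search and replace:

      Find what:    \r\n(?!1000)
      Replace with: _        (where "_" stands for a single space, as in the question)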

    Read the article

  • How do I insert 1000 times in one statement with SQLite?

    - by acidzombie24
    I want to fill this table with 10,000,000 values, but first I want only 1000. I tried this in SQLite Database Browser, but 3 isn't inserted unless I drop everything after it. More importantly, I don't know how to have num go from 1 to 1000. create table if not exists test1(id integer primary key, val integer); insert into test1(val) select '3' as num where num between 1 and 1000
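
    On SQLite versions with recursive common table expressions (3.8.3 and later; older builds of SQLite Database Browser may not qualify), the usual pattern for generating N rows looks like this sketch:

      -- Sketch: generate the numbers 1..1000 and insert one row per number.
      create table if not exists test1(id integer primary key, val integer);

      with recursive cnt(x) as (
          select 1
          union all
          select x + 1 from cnt where x < 1000
      )
      insert into test1(val) select 3 from cnt;
      -- (use "select x from cnt" instead if val should run 1..1000)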

    Read the article

  • NIC Bonding/balance-rr with Dell PowerConnect 5324

    - by Branden Martin
    I'm trying to get NIC bonding to work with balance-rr so that three NIC ports are combined, so that instead of getting 1 Gbps we get 3 Gbps. We are doing this on two servers connected to the same switch. However, we're only getting the speed of one physical link. We are using 1 Dell PowerConnect 5324, SW version 2.0.1.3, Boot version 1.0.2.02, HW version 00.00.02. Both servers are CentOS 5.9 (Final) running OnApp Hypervisor (CloudBoot) Server 1 is using ports g5-g7 in port-channel 1. Server 2 is using ports g9-g11 in port-channel 2. Switch show interface status Port Type Duplex Speed Neg ctrl State Pressure Mode -------- ------------ ------ ----- -------- ---- ----------- -------- ------- g1 1G-Copper -- -- -- -- Down -- -- g2 1G-Copper Full 1000 Enabled Off Up Disabled Off g3 1G-Copper -- -- -- -- Down -- -- g4 1G-Copper -- -- -- -- Down -- -- g5 1G-Copper Full 1000 Enabled Off Up Disabled Off g6 1G-Copper Full 1000 Enabled Off Up Disabled Off g7 1G-Copper Full 1000 Enabled Off Up Disabled On g8 1G-Copper Full 1000 Enabled Off Up Disabled Off g9 1G-Copper Full 1000 Enabled Off Up Disabled On g10 1G-Copper Full 1000 Enabled Off Up Disabled On g11 1G-Copper Full 1000 Enabled Off Up Disabled Off g12 1G-Copper Full 1000 Enabled Off Up Disabled On g13 1G-Copper -- -- -- -- Down -- -- g14 1G-Copper -- -- -- -- Down -- -- g15 1G-Copper -- -- -- -- Down -- -- g16 1G-Copper -- -- -- -- Down -- -- g17 1G-Copper -- -- -- -- Down -- -- g18 1G-Copper -- -- -- -- Down -- -- g19 1G-Copper -- -- -- -- Down -- -- g20 1G-Copper -- -- -- -- Down -- -- g21 1G-Combo-C -- -- -- -- Down -- -- g22 1G-Combo-C -- -- -- -- Down -- -- g23 1G-Combo-C -- -- -- -- Down -- -- g24 1G-Combo-C Full 100 Enabled Off Up Disabled On Flow Link Ch Type Duplex Speed Neg control State -------- ------- ------ ----- -------- ------- ----------- ch1 1G Full 1000 Enabled Off Up ch2 1G Full 1000 Enabled Off Up ch3 -- -- -- -- -- Not Present ch4 -- -- -- -- -- Not Present ch5 -- -- -- -- -- Not Present ch6 -- -- -- -- -- Not Present ch7 -- -- -- -- -- Not Present ch8 -- -- -- -- -- Not Present Server 1: cat /etc/sysconfig/network-scripts/ifcfg-eth3 DEVICE=eth3 HWADDR=00:1b:21:ac:d5:55 USERCTL=no BOOTPROTO=none ONBOOT=yes MASTER=onappstorebond SLAVE=yes cat /etc/sysconfig/network-scripts/ifcfg-eth4 DEVICE=eth4 HWADDR=68:05:ca:18:28:ae USERCTL=no BOOTPROTO=none ONBOOT=yes MASTER=onappstorebond SLAVE=yes cat /etc/sysconfig/network-scripts/ifcfg-eth5 DEVICE=eth5 HWADDR=68:05:ca:18:28:af USERCTL=no BOOTPROTO=none ONBOOT=yes MASTER=onappstorebond SLAVE=yes cat /etc/sysconfig/network-scripts/ifcfg-onappstorebond DEVICE=onappstorebond IPADDR=10.200.52.1 NETMASK=255.255.0.0 GATEWAY=10.200.2.254 NETWORK=10.200.0.0 USERCTL=no BOOTPROTO=none ONBOOT=yes cat /proc/net/bonding/onappstorebond Ethernet Channel Bonding Driver: v3.4.0-1 (October 7, 2008) Bonding Mode: load balancing (round-robin) MII Status: up MII Polling Interval (ms): 100 Up Delay (ms): 0 Down Delay (ms): 0 Slave Interface: eth3 MII Status: up Speed: 1000 Mbps Duplex: full Link Failure Count: 0 Permanent HW addr: 00:1b:21:ac:d5:55 Slave Interface: eth4 MII Status: up Speed: 1000 Mbps Duplex: full Link Failure Count: 0 Permanent HW addr: 68:05:ca:18:28:ae Slave Interface: eth5 MII Status: up Speed: 1000 Mbps Duplex: full Link Failure Count: 0 Permanent HW addr: 68:05:ca:18:28:af Server 2: cat /etc/sysconfig/network-scripts/ifcfg-eth3 DEVICE=eth3 HWADDR=00:1b:21:ac:d5:a7 USERCTL=no BOOTPROTO=none ONBOOT=yes MASTER=onappstorebond SLAVE=yes cat /etc/sysconfig/network-scripts/ifcfg-eth4 
DEVICE=eth4 HWADDR=68:05:ca:18:30:30 USERCTL=no BOOTPROTO=none ONBOOT=yes MASTER=onappstorebond SLAVE=yes cat /etc/sysconfig/network-scripts/ifcfg-eth5 DEVICE=eth5 HWADDR=68:05:ca:18:30:31 USERCTL=no BOOTPROTO=none ONBOOT=yes MASTER=onappstorebond SLAVE=yes cat /etc/sysconfig/network-scripts/ifcfg-onappstorebond DEVICE=onappstorebond IPADDR=10.200.53.1 NETMASK=255.255.0.0 GATEWAY=10.200.3.254 NETWORK=10.200.0.0 USERCTL=no BOOTPROTO=none ONBOOT=yes cat /proc/net/bonding/onappstorebond Ethernet Channel Bonding Driver: v3.4.0-1 (October 7, 2008) Bonding Mode: load balancing (round-robin) MII Status: up MII Polling Interval (ms): 100 Up Delay (ms): 0 Down Delay (ms): 0 Slave Interface: eth3 MII Status: up Speed: 1000 Mbps Duplex: full Link Failure Count: 0 Permanent HW addr: 00:1b:21:ac:d5:a7 Slave Interface: eth4 MII Status: up Speed: 1000 Mbps Duplex: full Link Failure Count: 0 Permanent HW addr: 68:05:ca:18:30:30 Slave Interface: eth5 MII Status: up Speed: 1000 Mbps Duplex: full Link Failure Count: 0 Permanent HW addr: 68:05:ca:18:30:31 Here are the results of iperf. ------------------------------------------------------------ Client connecting to 10.200.52.1, TCP port 5001 TCP window size: 27.7 KByte (default) ------------------------------------------------------------ [ 3] local 10.200.3.254 port 53766 connected with 10.200.52.1 port 5001 [ ID] Interval Transfer Bandwidth [ 3] 0.0-10.0 sec 950 MBytes 794 Mbits/sec

    Read the article

  • Site Search Engine for 1,000 page website

    - by Ian
    I manage a website with about 1,000 articles that need to be searchable by my members. The site search engines I've tried all had their own problems: Fluid Dynamics Search Engine Since it's written in perl, it was a bit hacky to integrate with my PHP-based CMS. I basically had to file_get_contents the search results page. However, FDSE had the best search results. Google CSE Ugh, the search results SUCK. It can't find documents even using unique strings. I'm so surprised that a Google search product is this bad. Nor can I get any answers on their 'help' forums, and I am a paying user. Boo, Google. Boo. Sphider Again, bad search results. Unable to locate some phrases used in link text. Better results than Google CSE though. Shame on Google that a free PHP script has better search results than their paid application. IndexTank This one looked really promising. I got all set up with their PHP API client. But it would only randomly add articles that I submitted. Out of 700+ articles I pushed to the index through their API, only 8 made it in. Unable to find any help on this subject. Update for IndexTank -- Got the above issue fixed, so this looks most promising so far. The site itself runs on php/mysql and FreeBSD, though this shouldn't matter for a web crawling indexer. I've looked at Lucene, but I don't know anything about Java or installing Java programs on my web server. I also do not have root access on my web server, if this would be required for installation. I really don't need a lot of fancy features. It just needs to be able to crawl my web site and return great (even decent!) search results. I don't need any crazy search operators. It doesn't need to index off my primary domain. It just needs to work! Thanks, Hive Mind!

    Read the article
