Search Results

Search found 2077 results on 84 pages for 'night coder'.


  • Is it normal for a programmer with 2 years experience to take a long time to code simple programs?

    - by ajax81
    Hi all, I'm a relatively new programmer (18 months on the scene), and I'm finally getting to the point where I'm comfortable accepting projects and developing solutions under minimal supervision. Unfortunately, this also means that I've become acutely aware of my performance shortfalls, the most prevalent of which is the amount of time it takes me to develop, test, and submit algorithms for review.

    A great example of what I'm talking about occurred this week when I was tasked with developing a simple XML web service (ASP.NET 3.5) callable via client-side JavaScript, that accepts a single parameter and returns a dataset output to a modal window (please note this is the first time I've had to develop a web service and have had ZERO experience creating/consuming them... let alone calling them from JS client side). Keeping a long story short -- I worked on it for 4 days straight, all day each day, for a grand total of 36 hours, not including the time I spent dwelling on the problem in the shower, the morning commute, and lying awake in bed at night. I learned a great deal about web services and XML/JSON/JavaScript... but was called in for a management review to discuss the length of time it took me to develop the solution.

    In the meeting, I was praised for the quality of my work and was in fact told that my effort was commendable. However, they (senior leads and PMs) weren't impressed with the amount of time it took me to develop the solution and expressed that they would have liked to see the solution in roughly 1/3 of the time it took me. I guess what concerns me the most is that I've identified this pattern as common for myself. Between online videos, book research, and trial/error coding... if it's something I haven't seen before, I can spend up to two weeks on a problem that seems to only take the pros in the videos moments to code up. And of course, knowing that management isn't happy with this pattern has shaken me up a bit.

    To sum up, I have some very specific questions I'd like to ask, and would greatly appreciate your objective professional feedback.

    Is my experience as a junior programmer common among new developers? Or is it possible that I'm just not cut out for the work?

    If you suspect that my experience is not common and that there may be an aptitude issue, do you have any suggestions/solutions that I could propose to management to help bring me up to speed?

    Do seasoned, professional programmers ever encounter knowledge barriers that considerably delay deliverables?

    When you started out in the industry, did you know how to "do it all"? If not, how long did it take you to be perceived as "proficient"? Was it a natural progression of trial and error, or was there a particular zen moment when you knew you had achieved super saiyan power level?

    Anyways, thanks for taking the time to read my question(s). I don't know if this is the right place to ask for professional career guidance, but I greatly appreciate your willingness to help me out. Cheers, Daniel

    Read the article

  • Google maps API : V2 : Custom infowindow with bindInfoWindowHtml

    - by PlanetUnknown
    The API documentation gave me hopes last night with "bindInfoWindowHtml". But it doesn't seem to replace the default infoWindow, even when you provide your own class etc. I have tried other ideas like the labeledmarker, but it doesn't support draggable markers, hence I can't use it in my application. Here is the sample code, which shows the info window inside the original bubble. Isn't there a way to override that window as well?

      <style type="text/css">
        .infoWindowCustomClass { width: 500px; height: 500px; background-color: #CAEE96; color: #666; }
      </style>
      <meta http-equiv="content-type" content="text/html; charset=utf-8"/>
      <title>Google Maps JavaScript API Example</title>
      <script src="http://maps.google.com/maps?file=api&amp;v=2&amp;sensor=false&amp;key="" type="text/javascript"></script>
      <script type="text/javascript">
        function load() {
          if (GBrowserIsCompatible()) {
            // Create our "tiny" marker icon
            var blueIcon = new GIcon(G_DEFAULT_ICON);
            blueIcon.image = "http://www.google.com/intl/en_us/mapfiles/ms/micons/blue-dot.png";
            // Set up our GMarkerOptions object
            markerOptions = { icon:blueIcon };
            var map = new GMap2(document.getElementById("map"));
            map.setCenter(new GLatLng(33.968064,-83.377047), 13);
            markerOptions.title = "fart";
            var point = new GLatLng(33.968064,-83.377047);
            var marker = new GMarker(point);
            var tempName = document.getElementById("infoWindowCustom");
            marker.bindInfoWindowHtml(tempName);
            map.addOverlay(marker);
          }
        }
      </script>

    And here is the DIV:

      <DIV id="infoWindowCustom" name="infoWindowCustom" class="infoWindowCustomClass">
        Name : <TEXTAREA NAME="nameID" ID="nameID" ROWS="2" COLS="25"></TEXTAREA>
        Comments : <TEXTAREA NAME="commentsID" ID="commentsID" ROWS="4" COLS="25"></TEXTAREA>
      </DIV>
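    One thing worth checking, sketched below under the assumption that this is the v2 API shown above: bindInfoWindowHtml() expects an HTML string, not a DOM node, so passing the element returned by getElementById() is a likely reason the bubble shows no custom content. The bubble chrome itself is drawn by the API (replacing it outright usually means a custom overlay such as the third-party ExtInfoWindow), but the inside can be styled along these lines:

      // Hedged sketch only -- GMap2/GMarker and the CSS class come from the question above.
      var marker = new GMarker(new GLatLng(33.968064, -83.377047), markerOptions);

      // Hand bindInfoWindowHtml() markup, not the element itself, and wrap it in the
      // styled class so the inside of the bubble picks up the custom CSS.
      var html = document.getElementById("infoWindowCustom").innerHTML;
      marker.bindInfoWindowHtml(
        '<div class="infoWindowCustomClass">' + html + '</div>',
        { maxWidth: 500 }   // GInfoWindowOptions; assumed here, adjust as needed
      );
      map.addOverlay(marker);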

    Read the article

  • Return pre-UPDATE column values in PostgreSQL without using triggers, functions or other "magic"

    - by Python Larry
    I have a related question, but this is another part of MY puzzle. I would like to get the OLD VALUE of a column from a row that was UPDATEd -- WITHOUT using triggers (nor stored procedures, nor any other extra, non-SQL/-query entities). The query I have is like this:

      UPDATE my_table
      SET processing_by = our_id_info -- unique to this instance
      WHERE trans_nbr IN (
        SELECT trans_nbr FROM my_table
        GROUP BY trans_nbr
        HAVING COUNT(trans_nbr) > 1
        LIMIT our_limit_to_have_single_process_grab
      )
      RETURNING row_id

    If I could do "FOR UPDATE ON my_table" at the end of the subquery, that'd be divine (and fix my other question/problem). But that won't work: you can't have this AND a "GROUP BY" (which is necessary for figuring out the COUNT of trans_nbr's). Then I could just take those trans_nbr's and do a query first to get the (soon-to-be-) former processing_by values. I've tried doing it like:

      UPDATE my_table
      SET processing_by = our_id_info -- unique to this instance
      FROM my_table old_my_table
      JOIN (
        SELECT trans_nbr FROM my_table
        GROUP BY trans_nbr
        HAVING COUNT(trans_nbr) > 1
        LIMIT our_limit_to_have_single_process_grab
      ) sub_my_table
      ON old_my_table.trans_nbr = sub_my_table.trans_nbr
      WHERE my_table.trans_nbr = sub_my_table.trans_nbr
      AND my_table.processing_by = old_my_table.processing_by
      RETURNING my_table.row_id, my_table.processing_by, old_my_table.processing_by

    But that can't work; "old_my_table" is not viewable outside of the join; the RETURNING clause is blind to it. I've long since lost count of all the attempts I've made; I have been researching this for literally hours. If I could just find a bullet-proof way to lock the rows in my subquery -- and ONLY those rows, and WHEN the subquery happens -- all the concurrency issues I'm trying to avoid disappear...

    UPDATE: [WIPES EGG OFF FACE] Okay, so I had a typo in the non-generic code of the above that I wrote "doesn't work"; it does... thanks to Erwin Brandstetter, below, who stated it would, I re-did it (after a night's sleep, refreshed eyes, and a banana for bfast). Since it took me so long/hard to find this sort of solution, perhaps my embarrassment is worth it? At least this is on SO for posterity now... What I now have (that works) is like this:

      UPDATE my_table
      SET processing_by = our_id_info -- unique to this instance
      FROM my_table AS old_my_table
      WHERE trans_nbr IN (
        SELECT trans_nbr FROM my_table
        GROUP BY trans_nbr
        HAVING COUNT(*) > 1
        LIMIT our_limit_to_have_single_process_grab
      )
      AND my_table.row_id = old_my_table.row_id
      RETURNING my_table.row_id, my_table.processing_by, old_my_table.processing_by AS old_processing_by

    The COUNT(*) is per a suggestion from Flimzy in a comment on my other (linked above) question. (I was more specific than necessary. [In this instance.])

    Read the article

  • How to break a Hibernate session?

    - by Péter Török
    In the Hibernate reference, it is stated several times that

      All exceptions thrown by Hibernate are fatal. This means you have to roll back the database transaction and close the current Session. You aren't allowed to continue working with a Session that threw an exception.

    One of our legacy apps uses a single session to update/insert many records from files into a DB table. Each record update/insert is done in a separate transaction, which is then duly committed (or rolled back in case an error occurred). Then for the next record a new transaction is opened, etc. But the same session is used throughout the whole process, even if a HibernateException was caught in the middle. We are using Oracle 9i, btw, with Hibernate 3.24.sp1 on JBoss 4.2.

    Reading the above in the book, I realized that this design may fail. So I refactored the app to use a separate session for each record update. In a unit test with a mock session factory, I could prove that it is now requesting a new session for each record update. So far, so good.

    However, we found no way to reproduce the session failure while testing the whole app (would this be a stress test, btw, or ...?). We thought of shutting down the listener of the DB, but we realized that the app keeps a bunch of connections open to the DB, and the listener would not affect those connections. (This is a web app, activated once every night by a scheduler, but it can also be activated via the browser.) Then we tried to kill some of those connections in the DB while the app was processing updates -- this resulted in some failed updates, but then the app happily continued. Apparently Hibernate is clever enough to reopen broken connections under the hood without breaking the whole session.

    So this might not be a critical issue, as our app seems to be robust enough even in its original form. However, the issue keeps bugging me. I would like to know:

    Under what circumstances does the Hibernate session really become unusable after a HibernateException was thrown?

    How can I reproduce this in a test? (What's the proper term for such a test?)

    Read the article

  • Searches (and general querying) with HBase and/or Cassandra (best practices?)

    - by alexeypro
    I have a User model object with quite a few fields (properties, if you wish) in it. Say "firstname", "lastname", "city" and "year-of-birth". Each user also gets a "unique id". I want to be able to search by them. How do I do that properly? How do I do that at all?

    My understanding (which will work for pretty much any key-value storage -- first goes the key, then the value):

      u:123456789 = serialized_json_object

    ("u" is a simple prefix for user keys, 123456789 is the "unique id"). Now, thinking that I want to be able to search by firstname and lastname, I can save:

      f:Steve = u:384734807,u:2398248764,u:23276263
      f:Alex  = u:12324355,u:121324334

    so the key is "f" -- a prefix for firstnames -- and "Steve" is the actual firstname. For "f:Steve" we save as the value all the user ids of users who are Steves. That makes every search very, very easy. Querying by a few fields (properties) -- say by firstname (i.e. "Steve") and lastname (i.e. "l:Anything") -- is still easy: first get the list of user ids from "f:Steve", then the list from "l:Anything", find the intersecting user ids, and there you go.

    Problems (and there are quite a few):

    Saving, updating, and deleting a user is a pain. It has to be an atomic and consistent operation. Also, if the size of a value is limited to some maximum, then we are in (potential) trouble. And I really have no answer here. Only zipping the list of user ids? Not too cool, though.

    What if we want to add a new field to search by, eventually? Say by "city". We certainly can do it the same way -- "c:Los Angeles" = ..., "c:Chicago" = ... -- but if we didn't foresee all those "search choices" from the very beginning, then we will have to create some night job or something to go over all existing User records and update those "c:CITY" entries for them... quite a big job!

    Problems with locking. User "u:123" updates his name to "Alex", and user "u:456" updates his name to "Alex". They both have to update "f:Alex" with their ids. That means either we get an overwriting problem, or one update will wait for the other (and imagine if there are many of them?!).

    What's the best way of doing this, keeping in mind that I want to search by many fields?

    P.S. Please, the question is about HBase/Cassandra/NoSQL/key-value storages. Please, please -- no advice to use MySQL and "read about" SELECTs, and to worry about scaling problems "later". There is a reason why I asked MY question exactly the way I did. :-)
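    A sketch of how this is often modelled in a wide-row store, with nothing assumed about any particular client library: instead of keeping one serialized list under "f:Steve", make each user id its own column in that index row. Separate writers then touch separate columns, so the read-modify-write behind the locking problem above goes away, and removing a user from the index is a single column delete. In plain JavaScript objects standing in for rows and columns:

      // Conceptual layout only: row key = prefix + value, column name = user id.
      var index = {
        "f:Steve": { "u:384734807": 1, "u:2398248764": 1, "u:23276263": 1 },
        "f:Alex":  { "u:12324355": 1, "u:121324334": 1 },
        "c:Chicago": { "u:384734807": 1 }   // a later "city" index is just more rows
      };

      // Adding user u:123 with firstname "Alex" is a single column insert -- no read first,
      // and no contention with another writer adding a different column to the same row.
      index["f:Alex"]["u:123"] = 1;

      // The AND query stays the same: fetch both rows and intersect the column names.
      function intersect(rowA, rowB) {
        return Object.keys(rowA).filter(function (id) { return id in rowB; });
      }

    The backfill problem for a newly indexed field still exists, but it becomes a one-off batch of independent column writes rather than read-modify-write cycles against hot rows.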

    Read the article

  • Can GPU capabilities impact virtual machine performance?

    - by Dave White
    While this may not seem like a programming question directly, it impacts my development activities and so it seems like it belongs here.

    It seems that more and more developers are turning to virtual environments for development activities on their computers, SharePoint development being a prime example. Also, as a trainer, I have virtual training environments for all of the classes that I teach.

    I recently purchased a new Dell E6510 to travel around with. It has the i7 620M (dual-core, hyperthreaded CPU running at 2.66GHz) and 8 GB of memory. Reading the spec sheet, it sounded like it would be a great laptop to carry around and run virtual machines on. Getting the laptop, though, I've been pretty disappointed with the user experience of developing in a virtual machine. Giving the virtual machine 4 GB of memory, it was slow and I could type complete sentences and watch the VM "catch up".

    My company has training laptops that we provide for our classes. They are Dell Precision M6400s with an Intel Core 2 Duo P8700 running at 2.54GHz and 8 GB of memory, and the experience on these laptops is night and day compared to the E6510. They are crisp and you are barely aware that you are running in a virtual environment.

    Since the E6510 should be faster in all categories than the M6400, I couldn't understand why the new laptop was slower, so I did a component-by-component comparison, and the only place where the E6510 is less performant than the M6400 is the graphics department. The M6400 is running an nVidia FX 2700M GPU and the E6510 is running an nVidia 3100M GPU. Looking at benchmarks of the two GPUs suggests that the FX 2700M is twice as fast as the 3100M (http://www.notebookcheck.net/Mobile-Graphics-Cards-Benchmark-List.844.0.html):

      3100M          = 111th (E6510)
      FX 2700M       = 47th  (Precision M6400)
      Radeon HD 5870 = 8th   (Alienware)

    The host OS is Windows 7 64-bit, as is the guest OS, running in VirtualBox 3.1.8 with Guest Additions installed on the guest. The IDE being used in the virtual environment is VS 2010 Premium.

    So after that long setup, my question is: Is the GPU significantly impacting the virtual machine's performance, or are there other factors that I'm not looking at that I can use to boost the VM's performance? Do we now have to consider GPU performance when purchasing laptops where we expect to use virtualized development environments?

    Thanks in advance. Cheers, Dave

    Read the article

  • Looking for Fiddler2 help. connection to gateway refused? Just got rid of a virus

    - by John Mackey
    I use Fiddler2 for facebook game items, and it's been a great success. I accessed a website to download some dat files I needed. I think it was eshare, ziddu or megaupload, one of those. Anyway, even before the rar file had downloaded, I got this weird green shield in the bottom right hand corner of my computer. It said a Trojan was trying to access my computer, or something to that extent. It prompted me to click the shield to begin anti-virus scanning. It turns out this rogue program is called Antivirus System Pro and is pretty hard to get rid of.

    After discovering the rogue program, I tried using Fiddler and got the following error:

      [Fiddler] Connection to Gateway failed.
      Exception Text: No connection could be made because the target machine actively refused it 127.0.0.1:5555

    I ended up purchasing SpyDoctor + Antivirus, which I'm told is designed specifically for getting rid of these types of programs. Anyway, I did a quick scan last night with SpyDoctor and Malwarebytes. Malwarebytes picked up 2 files, and SpyDoctor found 4. Most were insignificant, but it did find a worm called Worm.Alcra.F, which was labeled high-priority. I don't know if that's the Antivirus System Pro or not, but SpyDoctor said it got rid of all of those successfully. I tried to run Fiddler again before leaving home, but was still getting the "gateway failed" error.

    I'm using the newest version of Firefox. When I initially set up Fiddler 2.2.8.6, I couldn't get it to run at first, so I found a FAQ on the internet that said I needed to go through Tools > Options > Settings and set up an HTTP proxy of 127.0.0.1 with the port set to 8888. Once I set that up and downloaded the Fiddler helper as a Firefox add-on, it worked fine. When I turn on Fiddler, it automatically takes my proxy setting from no proxy (the default) to 127.0.0.1 with port 8888. It worked fine until my computer detected this virus.

    Anyway, hopefully I've given you sufficient information to offer me your best advice here. Like I said, SpyDoctor says the bad stuff is gone, so maybe the rogue program made some type of change in my Fiddler that I could just reset or uncheck or something like that? Or will I need to completely remove Fiddler and those dat files and rar files I downloaded? Any help would be greatly appreciated. Thanks for your time.

    Read the article

  • Python: Script works, but seems to deadlock after some time

    - by sberry2A
    I have the following script, which is working for the most part Link to PasteBin The script's job is to start a number of threads which in turn each start a subprocess with Popen. The output from each subprocess is as follows: 1 2 3 . . . n Done Bascially the subprocess is transferring 10M records from tables in one database to different tables in another db with a lot of data massaging/manipulation in between because of the different schemas. If the subprocess fails at any time in it's execution (bad records, duplicate primary keys, etc), or it completes successfully, it will output "Done\n". If there are no more records to select against for transfer then it will output "NO DATA\n" My intent was to create my script "tableTransfer.py" which would spawn a number of these processes, read their output, and in turn output information such as number of updates completed, time remaining, time elapsed, and number of transfers per second. I started running the process last night and checked in this morning to see it had deadlocked. There were not subprocceses running, there are still records to be updated, and the script had not exited. It was simply sitting there, no longer outputting the current information because no subprocces were running to update the total number complete which is what controls updates to the output. This is running on OS X. I am looking for three things: I would like to get rid of the possibility of this deadlock occurring so I don't need to check in on it as frequently. Is there some issue with locking? Am I doing this in a bad way (gThreading variable to control looping of spawning additional thread... etc.) I would appreciate some suggestions for improving my overall methodology. How should I handle ctrl-c exit? Right now I need to kill the process, but assume I should be able to use the signal module or other to catch the signal and kill the threads, is that right? I am not sure whether I should be pasting my entire script here, since I usually just paste snippets. Let me know if I should paste it here as well.

    Read the article

  • Many-To-Many Query with Linq-To-NHibernate

    - by rjygraham
    Ok guys (and gals), this one has been driving me nuts all night and I'm turning to your collective wisdom for help. I'm using Fluent NHibernate and Linq-To-NHibernate as my data access story and I have the following simplified DB structure:

      CREATE TABLE [dbo].[Classes](
        [Id] [bigint] IDENTITY(1,1) NOT NULL,
        [Name] [nvarchar](100) NOT NULL,
        [StartDate] [datetime2](7) NOT NULL,
        [EndDate] [datetime2](7) NOT NULL,
        CONSTRAINT [PK_Classes] PRIMARY KEY CLUSTERED ( [Id] ASC )

      CREATE TABLE [dbo].[Sections](
        [Id] [bigint] IDENTITY(1,1) NOT NULL,
        [ClassId] [bigint] NOT NULL,
        [InternalCode] [varchar](10) NOT NULL,
        CONSTRAINT [PK_Sections] PRIMARY KEY CLUSTERED ( [Id] ASC )

      CREATE TABLE [dbo].[SectionStudents](
        [SectionId] [bigint] NOT NULL,
        [UserId] [uniqueidentifier] NOT NULL,
        CONSTRAINT [PK_SectionStudents] PRIMARY KEY CLUSTERED ( [SectionId] ASC, [UserId] ASC )

      CREATE TABLE [dbo].[aspnet_Users](
        [ApplicationId] [uniqueidentifier] NOT NULL,
        [UserId] [uniqueidentifier] NOT NULL,
        [UserName] [nvarchar](256) NOT NULL,
        [LoweredUserName] [nvarchar](256) NOT NULL,
        [MobileAlias] [nvarchar](16) NULL,
        [IsAnonymous] [bit] NOT NULL,
        [LastActivityDate] [datetime] NOT NULL,
        PRIMARY KEY NONCLUSTERED ( [UserId] ASC )

    I omitted the foreign keys for brevity, but essentially this boils down to: a Class can have many Sections. A Section can belong to only 1 Class but can have many Students. A Student (aspnet_Users) can belong to many Sections.

    I've set up the corresponding Model classes and Fluent NHibernate mapping classes, and all of that is working fine. Here's where I'm getting stuck: I need to write a query which will return the sections a student is enrolled in, based on the student's UserId and the dates of the class. Here's what I've tried so far:

      1. var sections = (from s in this.Session.Linq<Sections>()
                         where s.Class.StartDate <= DateTime.UtcNow
                            && s.Class.EndDate > DateTime.UtcNow
                            && s.Students.First(f => f.UserId == userId) != null
                         select s);

      2. var sections = (from s in this.Session.Linq<Sections>()
                         where s.Class.StartDate <= DateTime.UtcNow
                            && s.Class.EndDate > DateTime.UtcNow
                            && s.Students.Where(w => w.UserId == userId).FirstOrDefault().Id == userId
                         select s);

    Obviously, 2 above will fail miserably if there are no students matching userId for classes with the current date between their start and end dates... but I just wanted to try. The filters for the Class StartDate and EndDate work fine, but the many-to-many relation with Students is proving to be difficult. Every time I try running the query I get an ArgumentNullException with the message:

      Value cannot be null. Parameter name: session

    I've considered going down the path of making the SectionStudents relation a Model class with a reference to Section and a reference to Student instead of a many-to-many. I'd like to avoid that if I can, and I'm not even sure it would work that way. Thanks in advance to anyone who can help. Ryan

    Read the article

  • Java app makes screen display unresponsive after 10 minutes of user idle time

    - by Ross
    I've written a Java app that allows users to script mouse/keyboard input (JMacro, link not important, only for the curious). I personally use the application to automate character actions in an online game overnight while I sleep. Unfortunately, I keep coming back to the computer in the morning to find it unresponsive. Upon further testing, I'm finding that my application causes the computer to become unresponsive after about 10 minutes of user idle time (even if the application itself is simulating user activity). I can't seem to pinpoint the issue, so I'm hoping somebody else might have a suggestion of where to look or what might be causing the issue.

    The relevant symptoms and characteristics:

    - Unresponsiveness occurs after the user is idle for 10 minutes
    - The user can still move the mouse pointer around the screen
    - Everything but the mouse appears frozen... mouse clicks have no effect and no applications update their displays, including the Windows 7 desktop
    - I left the task manager up along with the app overnight so I could see the last task manager image before the screen freezes... the Java app is at normal CPU/memory usage and total CPU usage is only ~1%
    - After moving the mouse (in other words, the user comes back from being idle), the screen image starts updating again within 30 minutes (this is very hit and miss... sometimes 10 minutes, sometimes no results after two hours)
    - The user can CTRL-ALT-DEL to get to Windows 7's CTRL-ALT-DEL screen (after a 30 second pause). The user is still able to move the mouse pointer, but clicking any of the button options causes the screen to appear to freeze again
    - On some very rare occasions, the system never freezes, and I come back to it in the morning with full responsiveness
    - The Java app automatically stops input scripting in the middle of the night, so Windows 7 detects "real" idleness and turns the monitors into standby mode... which they successfully come out of upon manually moving the mouse in the morning when I wake up, even though the desktop display still appears frozen

    Given the symptoms and characteristics of the issue, it's as if the Java app is causing the desktop display of the logged-in user to stop updating, including any running applications.

    Programming concepts and Java packages used:

    - Multi-threading
    - Standard out and err are rerouted to a javax.swing.JTextArea
    - The application uses a Swing GUI
    - awt.Robot (very heavily used)
    - awt.PointerInfo
    - awt.MouseInfo

    System specs:

    - Windows 7 Professional
    - Java 1.6.0 u17

    In conclusion, I should stress that I'm not looking for any specific solutions, as I'm not asking a very specific question. I'm just wondering if anybody has run into a similar problem when using the Java libraries that I'm using. I would also gladly appreciate any suggestions for things to try to attempt to further pinpoint what is causing my problem. Thanks! Ross

    PS, I'll post an update/answer if I manage to stumble across anything else while I continue to debug this.

    Read the article

  • Slow MySQL query....only sometimes

    - by Shane N
    I have a query that's used in a reporting system of ours that sometimes runs quicker than a second, and other times takes 1 to 10 minutes to run. Here's the entry from the slow query log:

      # Query_time: 543  Lock_time: 0  Rows_sent: 0  Rows_examined: 124948974
      use statsdb;
      SELECT count(distinct Visits.visitorid) as 'uniques'
      FROM Visits,Visitors
      WHERE Visits.visitorid=Visitors.visitorid
        and candidateid in (32)
        and visittime>=1275721200 and visittime<=1275807599
        and (omit=0 or omit>=1275807599)
        AND Visitors.segmentid=9
        AND Visits.visitorid NOT IN (
          SELECT Visits.visitorid
          FROM Visits,Visitors
          WHERE Visits.visitorid=Visitors.visitorid
            and candidateid in (32)
            and visittime<1275721200
            and (omit=0 or omit>=1275807599)
            AND Visitors.segmentid=9);

    It's basically counting unique visitors, and it's doing that by counting the visitors for today and then subtracting those that have been here before. If you know of a better way to do this, let me know. I just don't understand why sometimes it can be so quick, and other times takes so long -- even with the exact same query under the same server load.

    Here's the EXPLAIN on this query. As you can see, it's using the indexes I've set up:

      id  select_type         table     type    possible_keys                  key                  key_len  ref                       rows   Extra
      1   PRIMARY             Visits    range   visittime_visitorid,visitorid  visittime_visitorid  4        NULL                      82500  Using where; Using index
      1   PRIMARY             Visitors  eq_ref  PRIMARY,cand_visitor_omit      PRIMARY              8        statsdb.Visits.visitorid  1      Using where
      2   DEPENDENT SUBQUERY  Visits    ref     visittime_visitorid,visitorid  visitorid            8        func                      1      Using where
      2   DEPENDENT SUBQUERY  Visitors  eq_ref  PRIMARY,cand_visitor_omit      PRIMARY              8        statsdb.Visits.visitorid  1      Using where

    I tried to optimize the query a few weeks ago and came up with a variation that consistently took about 2 seconds, but in practice it ended up taking more time, since 90% of the time the old query returned much quicker. Two seconds per query is too long because we are calling the query up to 50 times per page load, with different time periods.

    Could the quick behavior be due to the query being saved in the query cache? I tried running 'RESET QUERY CACHE' and 'FLUSH TABLES' between my benchmark tests and I was still getting quick results most of the time.

    Note: last night while running the query I got an error: Unable to save result set. My initial research shows that may be due to a corrupt table that needs repair. Could this be the reason for the behavior I'm seeing?

    In case you want server info:

      Accessing via PHP 4.4.4
      MySQL 4.1.22
      All tables are InnoDB
      We run optimize table on all tables weekly
      The sum of both the tables used in the query is 500 MB

    MySQL config:

      key_buffer = 350M
      max_allowed_packet = 16M
      thread_stack = 128K
      sort_buffer = 14M
      read_buffer = 1M
      bulk_insert_buffer_size = 400M
      set-variable = max_connections=150
      query_cache_limit = 1048576
      query_cache_size = 50777216
      query_cache_type = 1
      tmp_table_size = 203554432
      table_cache = 120
      thread_cache_size = 4
      wait_timeout = 28800
      skip-external-locking
      innodb_file_per_table
      innodb_buffer_pool_size = 3512M
      innodb_log_file_size=100M
      innodb_log_buffer_size=4M

    Read the article

  • Expand and overlap image over a grid of other images

    - by soren121
    I've been fighting with jQuery all night and getting quite frustrated here, so forgive me if the answer was staring me in the face. I have a grid of 15 images contained within a <div>, and when I click on one of them, I want it to expand to three times its original size of 150x105 and overlap the rest. I've got the expanding down, but the overlapping isn't quite working.

    My current HTML:

      <div id="image-grid">
        <img src="images/after-pictures/1.jpg" class="igc1" />
        <img src="images/after-pictures/2.jpg" class="igc2" />
        <img src="images/after-pictures/3.jpg" class="igc3" />
        <img src="images/after-pictures/4.jpg" class="igc4" />
        <img src="images/after-pictures/5.jpg" class="igc5" />
        <img src="images/after-pictures/6.jpg" class="igc1" />
        <img src="images/after-pictures/7.jpg" class="igc2" />
        <img src="images/after-pictures/8.jpg" class="igc3" />
        <img src="images/after-pictures/9.jpg" class="igc4" />
        <img src="images/after-pictures/10.jpg" class="igc5" />
        <img src="images/after-pictures/11.jpg" class="igc1" />
        <img src="images/after-pictures/12.jpg" class="igc2" />
        <img src="images/after-pictures/13.jpg" class="igc3" />
        <img src="images/after-pictures/14.jpg" class="igc4" />
        <img src="images/after-pictures/15.jpg" class="igc5" />
      </div>

    The igc* class denotes the column the image is in.

    CSS:

      #image-grid {
        border: 1px solid #aaa;
        width: 745px;
        padding: 5px 2px 2px 5px;
      }
      #image-grid img {
        width: 150px;
        height: 105px;
        margin: -2px;
      }

    jQuery:

      $('#image-grid img').click(function() {
        $(this).animate({
          width: '450px',
          height: '315px',
          zIndex: '10'
        }, 200);
      });
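    A hedged guess at the missing piece: z-index only applies to positioned elements, and it is not really a property that needs animating. Adding position: relative to the #image-grid img rule and setting the z-index with .css() before the size animation usually produces the overlap:

      // Assumes the CSS above gains:  #image-grid img { position: relative; }
      $('#image-grid img').click(function () {
        $('#image-grid img').css('z-index', 1);   // drop the other images back down
        $(this).css('z-index', 10).animate({
          width: '450px',
          height: '315px'
        }, 200);
      });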

    Read the article

  • highlight query string in more than one field using solr search feature

    - by Romi
    I am using Solr indexes for showing my search results. To show the search results I am parsing the JSON data received from Solr. I am able to highlight a query string in the search results, but only in a single field. For this I set hl=true and hl.fl="field1". I did it as:

      $.getJSON("http://192.168.1.9:8983/solr/db/select/?wt=json&&start=0&rows=100&q="+lowerCaseQuery+"&hl=true&hl.fl=description,name&hl.usePhraseHighlighter=true&sort=price asc&json.wrf=?", function(result){
        var n=result.response.numFound
        var highlight = new Array(n);
        $.each(result.highlighting, function(i, hitem){
          var match = hitem.text[0].match(/<em>(.*?)<\/em>/);
          highlight[i]=match[1];
        });
        $.each(newresult.response.docs, function(i,item){
          var word=highlight[item["UID_PK"]];
          var result = item.text[0].replace(new RegExp(word,'g'), '<em>' + word + '</em>');
        });

    For this the JSON object is:

      {
        "responseHeader": { "status": 0, "QTime": 32 },
        "response": {
          "numFound": 21,
          "start": 0,
          "docs": [
            { "description": "The matte finish waves on this wedding band contrast with the high polish borders. This sharp and elegant design was finely crafted in Japan.", "UID_PK": "8252", },
            { "description": "This elegant ring has an Akoya cultured pearl with a band of bezel-set round diamonds making it perfect for her to wear to work or the night out.", "UID_PK": "8142", },
          ]
        },
        "highlighting": {
          "8252": { "description": [ " and <em>elegant</em> design was finely crafted in Japan." ] },
          "8142": { "description": [ "This <em>elegant</em> ring has an Akoya cultured pearl with a band of bezel-set round diamonds making" ] },
        }
      }

    Now if I want to highlight the query string in two fields, I set:

      hl=true
      hl.fl=description,name

    and my JSON is:

      {
        "responseHeader": { "status": 0, "QTime": 16 },
        "response": {
          "numFound": 1904,
          "start": 0,
          "docs": [
            { "description": "", "UID_PK": "7780", "name": [ "Diamond bracelet with Milgrain Bezel1" ] },
            { "description": "This pendant is sure to win hearts. Round diamonds form a simple and graceful line.", "UID_PK": "8121", "name": [ "Heartline Diamond Pendant" ] },
        "highlighting": {
          "7780": { "name": [ "<em>Diamond</em> bracelet with Milgrain Bezel1" ] },
          "8121": {
            "description": [ "This pendant is sure to win hearts. Round <em>diamonds</em> form a simple and graceful line." ],
            "name": [ "Heartline <em>Diamond</em> Pendant" ]
          }
        }
      }

    Now how should I parse it to get the result? Suggest a general technique, so that if I want to highlight the query in more fields I can do so. Thanks
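    One general-purpose way to walk the highlighting section, sketched with the same jQuery/JSON setup as above: each entry keyed by UID_PK holds only the fields that actually matched (description, name, or both), so loop over whatever fields are present instead of hard-coding one:

      $.each(result.highlighting, function (uid, fields) {
        // "fields" looks like { description: [...], name: [...] } -- only matched fields appear.
        $.each(fields, function (fieldName, snippets) {
          $.each(snippets, function (i, snippet) {
            var match = snippet.match(/<em>(.*?)<\/em>/);
            if (match) {
              // collect the highlighted term per document/field however you need it
              console.log(uid, fieldName, match[1]);
            }
          });
        });
      });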

    Read the article

  • What are some things you'd like fresh college grads to know?

    - by bradhe
    So I proposed this to the Reddit community and I'd like to get SO's perspective on this. This is pretty much the copypasta of what I put there. I was thinking about this last night and thought it would be neat to compile a list. I'm still a pretty fresh college grad -- been in industry for 2 years -- but I think that I might have a few interesting things to lend. You don't know as much as you think you do. Somehow, college students think they know a lot more than they do (or maybe that was just me). Likewise, they think they can do more than they actually can. You should fairly assess your skills. QA people are not out to get you. Humans introduce bugs to code. It's not (nescessarily) a personal reflection on you and your skills if your code has a bug and it's caught by the QA/testing team. Listen to your senior (developers). They are not actually fuddy duddies who don't know about the new L337 hax in Ruby (okay, sometimes they are, but still...). They have a wealth of knowledge that you can learn from and it's in your best interest to do so. You will most likely not be doing what you want to for a while. This is mostly true in the corporate world -- startups are a different matter. Also, this is due to more than just the economy, man! Junior devs need to earn their keep, so to speak. Everyone wants to be lead dev on the next project and there are a lot of people in line ahead of you! For every elite developer there are 100 average developers. Joel Spolsky, I'm looking at you. Somehow this concept of ninja coders has really ingrained itself in our culture. While I encourage you to be the best you can be don't be disappointed if people aren't writing blog posts about you in the near future. Anyone else have anything they would see added to this list?

    Read the article

  • Pure sine wave inverter

    - by Nick
    Not exactly programming (sorry) but I think it's pretty close and can be of interest to other programmers. I'm trying to setup a battery power station so that I can work from anywhere. I go surfing a lot and my idea is to be able to work from wherever I can park my car (given there's coverage). So, I'm getting a deep cycle battery, a 240V charger (I'm in Australia), and an inverter. At the back of my laptop it says 19V and 4.62A. From the people I've spoken to that means it consumes about 90W at most. So my inverter needs to be able to output about 100W. Most of them seem to be 200W and up so this shouldn't be a problem. I want to be able run my laptop for 10 hours (plus the 2 hours I get from the laptop battery) straight. According to the people I've spoken to and from what I gather online I need a battery that has the amp hours for my "amp draw". I have no idea how to calculate this but I've been guesstimating. Most deep cycle batteries seem to be classified using amp hours (Ah)... 35Ah, 50Ah, 75Ah, 100Ah, and so on. However the amp hours on those batteries is for a 240V and I seem to be using 19V. According to an expert I spoke to you'd need a 100Ah battery to power a 5A appliance at 240V for 10 hours (you only get about 50% useful power). That to me is 5A * 240V = 100Ah battery. So, naive as I might be I take 240V and divide that by my 19V and reach the conclusion that I can get away with a battery that's about 12 times smaller than that 100Ah. The expert told me I needed a 50Ah battery so that's probably what I'll be getting, but it would be interesting to know what I theoretically would need to power my laptop for 10 hours. As for charging the battery the expert I spoke to said I needed a 3-5A charger to be able to charge that 50Ah battery from flat to full in about 10 hours (I will just leave it plugged in over night). Now to my question. The expert said it's not a matter of "if" more like a guaranteed "when" my computer will stuff up if I don't use a "pure sine wave inverter". From what I gather the power that comes out of that battery is not as clean as the power we get in the socket at home. Apparently it's "square" and the one in the socket is nice and smooth. I've already got an inverter, but it's not "pure". Do I really need to buy the $200-300 pure sine wave inverter or can I get away with something else? Perhaps the laptop adapter that sits in the middle of my laptop power cable already fixes that wave to be nice and smooth? Thanks!
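    For the "what would I theoretically need" part, here is a rough back-of-the-envelope worked in code. The constants (12V nominal deep-cycle battery, roughly 85% inverter efficiency, only about 50% of a deep-cycle battery's rated capacity usable) are assumptions, not figures from the expert quoted above, so treat the result as a sanity check rather than a shopping spec:

      // Rough battery-sizing sketch; every constant below is an assumption.
      var laptopWatts = 19 * 4.62;        // ~88 W worst case, from the label on the laptop
      var hoursNeeded = 10;
      var inverterEfficiency = 0.85;      // typical small inverter, assumed
      var usableFraction = 0.5;           // don't run a deep-cycle battery below ~50%, assumed
      var batteryVolts = 12;              // nominal deep-cycle battery voltage, assumed

      var wattHours = laptopWatts * hoursNeeded / inverterEfficiency;  // ~1030 Wh from the battery
      var ampHoursDrawn = wattHours / batteryVolts;                    // ~86 Ah actually drawn
      var ratedCapacity = ampHoursDrawn / usableFraction;              // ~172 Ah rated capacity

      console.log(Math.round(ratedCapacity) + " Ah rated capacity, roughly");

    The point of working in watt-hours is that a battery's amp-hour rating is quoted at its own terminal voltage, not at 240V or 19V, so converting everything to energy first avoids the voltage-ratio guesswork.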

    Read the article

  • Store Observer not being called always

    - by Nixarn
    Has anyone else here experienced problems with their Store Observer class not being called always when the user for instance cancels a request (or purchases something) We just had our update that brought in app purchases go live last night, and before that we had obviously tested everything tons of times against the Sandbox and everything was working fine. Now however, when the update went live in a real environment we keep getting issues with the store. For instance, in a freshly booted iPhone / iPod, the first time you run the app, if you then try to make a purchase and then immediately cancel it from the first dialog, it seems as if the callback for the cancel is not getting called. If you then restart the app it seems as if it always works after that, or at least. Same thing with other callbacks, seems as if our store observer isn't listening as the callbacks aren't being registered on the phone. One example of this is if you purchase something, then nothing will happen (if this is the first time the app is launched at least). You get the purchase successful dialog from the app store but it seems as if our own code isn't called. If you then quit the app and restart it the callback gets called. Same problem happens if you for instance try to start a request to download all previous purchases and then immediately cancel it as the first dialog pops up, if you do that then the callback for a failed restore is not called, until you then restart the app and try it again, then it always seems to work. The way we have implemented our store observer is by creating a custom class that's implements the SKPaymentTransactionObserver interface. @interface StoreObserver : NSObject<SKPaymentTransactionObserver> In the class we have implemented the following methods: - (void)paymentQueue:(SKPaymentQueue *)queue updatedTransactions:(NSArray *)transactions - (void)paymentQueueRestoreCompletedTransactionsFinished:(SKPaymentQueue *)queue - (void)paymentQueue:(SKPaymentQueue *)queue restoreCompletedTransactionsFailedWithError:(NSError *)error The way our restore process works is that if you tap on the button that allows you to download all we simply run the restoreCompletedTransactions code as follows: [[SKPaymentQueue defaultQueue] restoreCompletedTransactions]; However, the callback, restoreCompletedTransactionsFailedWithError, which has been implemented in the store observer, does not always get called when we try to cancel the request. This happens when you boot the iPhone / iPod and try this for the first time. If you after that restart the app everything works fine. The StoreObserver class is created when our app is launched, just by running the following code: pStoreObserver = [[StoreObserver alloc] init]; [[SKPaymentQueue defaultQueue] addTransactionObserver:pStoreObserver]; Has anyone else had any similar experiences? Or does anyone have any suggestions on how to solve this? As I said, in the sandbox environment everything was working fine, no issues whatsoever, but now once it's gone live we're experiencing these.

    Read the article

  • HTML Language question

    - by Mike
    Note my code below. I am trying to figure out why my data is not changing to Spanish. I understand it to be one line of code and that is all within the HTML attribute lang=”es”. Any help would be greatly appreciated.

      <?xml version="1.0" encoding="UTF-8"?>
      <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
      <html xlmns="http://www.w3.org/1999/xhtml" lang=”es” xml:lang="en">
      <head>
        <title>JavaJam Coffee House</title>
        <link href="javajam.css" rel="stylesheet" type="text/css" />
      </head>
      <body bgcolor="brown">
        <h1>JavaJam Coffee House</h1>
        <ul>
          <li>Specialty Coffee and Tea</li>
          <li>Bagels, Muffins, and Organic Snacks</li>
          <li>Music and Poetry Readings</li>
          <li>Usability Studies</li>
          <li>Open Mic Night</li>
        </ul>
        <br></br>
        <p>12312 Main Street<br>
        Mountain Home, CA 93923<br>
        1-888-555-5555</br>
        </p>
        <p> <em> <small>Copyright &copy; 2008 JavaJam Coffee House</em></p>
        E-Mail <a href="mailto;[email protected]"> Michael J. Crawley</a>
      </body>
      </html>

    Read the article

  • ASP.NET MVC 3 NOT showing appropriate view, action called using jquery

    - by TunAntun
    I have a small problem. My action is:

      public ViewResult SearchForRooms(int HotelDDL)
      {
          List<Room> roomsInHotel = bl.ReturnRoomsPerHotel(HotelDDL);
          return View(roomsInHotel);
      }

    Here is the jQuery that is calling the action:

      <script type="text/javascript" language="javascript">
        $(document).ready(function () {
          $("#HotelDDL").change(function () {
            var text = $("#HotelDDL option:selected").text();
            var value = $("#HotelDDL option:selected").val();
            alert("Selected text=" + text + " Selected value= " + value);
            $.post("/Home/SearchForRooms", { HotelDDL: $("#HotelDDL option:selected").val() });
          });
        });
      </script>

    And finally, here is the view that should be rendered:

      @model IEnumerable<RoomReservation.Entities.Entities.Room>
      @{
          ViewBag.Title = "Search";
      }
      <h2>Result</h2>
      <table>
        <tr>
          <th>City</th>
          <th>Hotel</th>
          <th>Room label</th>
          <th>Number of beds</th>
          <th>Price per night</th>
          <th></th>
        </tr>
        @foreach (var item in Model) {
          <tr>
            <td>@Html.DisplayFor(modelitem => item.Hotel.City.Name)</td>
            <td>@Html.DisplayFor(modelItem => item.Hotel.Name)</td>
            <td>@Html.DisplayFor(modelItem => item.Name)</td>
            <td>@Html.DisplayFor(modelItem => item.NumberOfBeds)</td>
            <td>@Html.DisplayFor(modelItem => item.PricePerNight)</td>
          </tr>
        }
      </table>

    Everything is working OK (the database returns all rooms correctly) except the final view rendering. I have tried Phil's tool, but it doesn't give me any suspicious hints:

      RouteDebug.RouteDebugger.RewriteRoutesForTesting(RouteTable.Routes);

    So, why is it not showing after the script sends its POST to SearchForRooms()? Thank you

    P.S. If you need any other piece of code, please just say so.
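    A hedged sketch of the usual cause: $.post receives the rendered markup just fine, but nothing in the script above does anything with the response, so the browser never navigates or updates the page. Handling the response in the callback and injecting it into a placeholder element (the #roomResults div below is made up for illustration) is normally all that is missing:

      $("#HotelDDL").change(function () {
        var hotelId = $("#HotelDDL option:selected").val();

        $.post("/Home/SearchForRooms", { HotelDDL: hotelId }, function (html) {
          // The action returns rendered markup; put it somewhere visible on the page.
          $("#roomResults").html(html);   // "#roomResults" is a hypothetical container div
        });
      });

    If the action stays a ViewResult, the response comes back wrapped in the layout; returning a partial view is the common choice when the markup is injected like this.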

    Read the article

  • CSS / HTML - Image will not show up

    - by weka
    Ugh, ok. I've been up all night working on this thing and now an image won't show. It's so darn annoying. I'm trying to get this .png image to show up on a simple PHP webpage. I just wanna go to sleep X_X

    CSS:

      <style>
        .achievement {
          position: relative;
          width: 500px;
          background: #B5B5B5;
          float: left;
          padding: 10px;
          margin-bottom: 10px;
        }
        .icon {
          float: left;
          width: 32px;
          height: 32px;
          background: url("images/trophy.php") no-repeat center;
          padding: 05px;
          border: 4px solid #4D4D4D;
        }
        .ptsgained {
          position: absolute;
          top: 0;
          right: 0;
          background: #79E310;
          color: #fff;
          font-family: Tahoma;
          font-weight: bold;
          font-size: 12px;
          padding: 5px;
        }
        .achievement h1 {
          color: #454545;
          font-size: 12pt;
          font-family: Georgia;
          font-weight: none;
          margin: 0;
          padding: 0;
        }
        .achievement p {
          margin: 0;
          padding: 0;
          font-size: 12px;
          font-family: Tahoma;
          color: #1C1C1C;
        }
        .text {
          margin-left: 10px;
          float: left;
        }
      </style>

    HTML:

      <div class="achievement">
        <span class="ptsgained">+10</span>
        <div class="icon"></div>
        <div class="text"><h1>All Around Submitter</h1>
          <p>Submit and have approved content in all 6 areas.</p>
        </div>
      </div>

    What am I doing wrong, guys? :\

    Read the article

  • Week in Geek: 4chan Falls Victim to DDoS Attack Edition

    - by Asian Angel
    This week we learned how to tweak the low battery action on a Windows 7 laptop, access an eBook collection anywhere in the world, “extend iPad battery life, batch resize photos, & sync massive music collections”, went on a reign of destruction with Snow Crusher, and had fun decorating our desktops with abstract icon collections. Photo by pasukaru76. Random Geek Links We have included extra news article goodness to help you catch up on any developments that you may have missed during the holiday break this past week. Note: The three 27C3 articles listed here represent three different presentations at the 27th Chaos Communication Congress hacker conference. 4chan victim of DDoS as FBI investigates role in PayPal attack Users of 4chan may have gotten a taste of their own medicine after the site was knocked offline by a DDoS attack from an unknown origin early Thursday morning. Report: FBI seizes server in probe of WikiLeaks attacks The FBI has seized a server in Texas as part of its hunt for the groups behind the pro-WikiLeaks denial-of-service attacks launched in December against PayPal, Visa, MasterCard, and others. Mozilla exposes older user-account database Mozilla has disabled 44,000 older user accounts for its Firefox add-ons site after a security researcher found part of a database of the account information on a publicly available server. Data breach affects 4.9 million Honda customers Japanese automaker Honda has put some 2.2 million customers in the United States on a security breach alert after a database containing information on the owners and their cars was hacked. Chinese Trojan discovered in Android games An Android-based Trojan called “Geinimi” has been discovered in the wild and the Trojan is capable of sending personal information to remote servers and exhibits botnet-like behavior. 27C3 presentation claims many mobiles vulnerable to SMS attacks According to security experts, an ‘SMS of death’ threatens to disable many current Sony Ericsson, Samsung, Motorola, Micromax and LG mobiles. 27C3: GSM cell phones even easier to tap Security researchers have demonstrated how open source software on a number of revamped, entry-level cell phones can decrypt and record mobile phone calls in the GSM network. 27C3: danger lurks in PDF documents Security researcher Julia Wolf has pointed out numerous, previously hardly known, security problems in connection with Adobe’s PDF standard. Critical update for WordPress A critical update has been made available for WordPress in the form of version 3.0.4. The update fixes a security bug in WordPress’s KSES library. McAfee Labs Predicts Geolocation, Mobile Devices and Apple Will Top the List of Targets for Emerging Threats in 2011 The list comprises 2010’s most buzzed about platforms and services, including Google’s Android, Apple’s iPhone, foursquare, Google TV and the Mac OS X platform, which are all expected to become major targets for cybercriminals. McAfee Labs also predicts that politically motivated attacks will be on the rise. Windows Phone 7 piracy materializes with FreeMarketplace A proof-of-concept application, FreeMarketplace, that allows any Windows Phone 7 application to be downloaded and installed free of charge has been developed. Empty email accounts, and some bad buzz for Hotmail In the past few days, a number of Hotmail users have been complaining about a rather disconcerting issue: their Hotmail accounts, some up to 10 years old, appear completely empty.  
No emails, no folders, nothing, just what appears to be a new account. Reports: Nintendo warns of 3DS risk for kids Nintendo has reportedly issued a warning that the 3DS, its eagerly awaited glasses-free 3D portable gaming device, should not be used by children under 6 when the gadget is in 3D-viewing mode. Google eyes ‘cloaking’ as next antispam target Google plans to take a closer look at the practice of “cloaking,” or presenting one look to a Googlebot crawling one’s site while presenting another look to users. Facebook, Twitter stock trading drawing SEC eye? The high degree of investor interest in shares of hot Silicon Valley companies that aren’t yet publicly traded–like Facebook, Twitter, LinkedIn, and Zynga–may be leading to scrutiny from the U.S. Securities and Exchange Commission (SEC). Random TinyHacker Links Photo by jcraveiro. Exciting Software Set for Release in 2011 A few bloggers from great websites such as How-To Geek, Guiding Tech and 7 Tutorials took the time to sit down and talk about their software wishes for 2011. Take the time to read it and share… Wikileaks Infopr0n An infographic detailing the quest to plug WikiLeaks. The New York Times Guide to Mobile Apps A growing collection of all mobile app coverage by the New York Times as well as lists of favorite apps from Times writers. 7,000,000,000 (Video) A fascinating look at the world’s population via National Geographic Magazine. Super User Questions Check out the great answers to these hot questions from Super User. How to use a Personal computer as a Linux web server for development purposes? How to link processing power of old computers together? Free virtualization tool for testing suspicious files? Why do some actions not work with Remote Desktop? What is the simplest way to send a large batch of pictures to a distant friend or colleague? How-To Geek Weekly Article Recap Had a busy week and need to get caught up on your HTG reading? Then sit back and relax while enjoying these hot posts full of how-to roundup goodness. The 50 Best How-To Geek Windows Articles of 2010 The 20 Best How-To Geek Explainer Topics for 2010 The 20 Best How-To Geek Linux Articles of 2010 How to Search Just the Site You’re Viewing Using Google Search Ask the Readers: Backing Your Files Up – Local Storage versus the Cloud One Year Ago on How-To Geek Need more how-to geekiness for your weekend? Then look through this great batch of articles from one year ago that focus on dual-booting and O.S. installation goodness. Dual Boot Your Pre-Installed Windows 7 Computer with Vista Dual Boot Your Pre-Installed Windows 7 Computer with XP How To Setup a USB Flash Drive to Install Windows 7 Dual Boot Your Pre-Installed Windows 7 Computer with Ubuntu Easily Install Ubuntu Linux with Windows Using the Wubi Installer The Geek Note We hope that you and your families have had a terrific holiday break as everyone prepares to return to work and school this week. Remember to keep those great tips coming in to us at [email protected]! Photo by pjbeardsley. 

    Read the article

  • Ajax Control Toolkit and Superexpert

    - by Stephen Walther
    Microsoft has asked my company, Superexpert Consulting, to take ownership of the development and maintenance of the Ajax Control Toolkit moving forward. In this blog entry, I discuss our strategy for improving the Ajax Control Toolkit. Why the Ajax Control Toolkit? The Ajax Control Toolkit is one of the most popular projects on CodePlex. In fact, some have argued that it is among the most successful open-source projects of all time. It consistently receives over 3,500 downloads a day (not weekends -- workdays). A mind-boggling number of developers use the Ajax Control Toolkit in their ASP.NET Web Forms applications. Why does the Ajax Control Toolkit continue to be such a popular project? The Ajax Control Toolkit fills a strong need in the ASP.NET Web Forms world. The Toolkit enables Web Forms developers to build richly interactive JavaScript applications without writing any JavaScript. For example, by taking advantage of the Ajax Control Toolkit, a Web Forms developer can add modal dialogs, popup calendars, and client tabs to a web application simply by dragging web controls onto a page. The Ajax Control Toolkit is not for everyone. If you are comfortable writing JavaScript then I recommend that you investigate using jQuery plugins instead of the Ajax Control Toolkit. However, if you are a Web Forms developer and you don’t want to get your hands dirty writing JavaScript, then the Ajax Control Toolkit is a great solution. The Ajax Control Toolkit is Vast The Ajax Control Toolkit consists of 40 controls. That’s a lot of controls (For the sake of comparison, jQuery UI consists of only 8 controls – those slackers J). Furthermore, developers expect the Ajax Control Toolkit to work on browsers both old and new. For example, people expect the Ajax Control Toolkit to work with Internet Explorer 6 and Internet Explorer 9 and every version of Internet Explorer in between. People also expect the Ajax Control Toolkit to work on the latest versions of Mozilla Firefox, Apple Safari, and Google Chrome. And, people expect the Ajax Control Toolkit to work with different operating systems. Yikes, that is a lot of combinations. The biggest challenge which my company faces in supporting the Ajax Control Toolkit is ensuring that the Ajax Control Toolkit works across all of these different browsers and operating systems. Testing, Testing, Testing Because we wanted to ensure that we could easily test the Ajax Control Toolkit with different browsers, the very first thing that we did was to set up a dedicated testing server. The dedicated server -- named Schizo -- hosts 4 virtual machines so that we can run Internet Explorer 6, Internet Explorer 7, Internet Explorer 8, and Internet Explorer 9 at the same time (We also use the virtual machines to host the latest versions of Firefox, Chrome, Opera, and Safari). The five developers on our team (plus me) can each publish to a separate FTP website on the testing server. That way, we can quickly test how changes to the Ajax Control Toolkit affect different browsers. QUnit Tests for the Ajax Control Toolkit Introducing regressions – introducing new bugs when trying to fix existing bugs – is the concern which prevents me from sleeping well at night. There are so many people using the Ajax Control Toolkit in so many unique scenarios, that it is difficult to make improvements to the Ajax Control Toolkit without introducing regressions. 
In order to avoid regressions, we decided early on that it was extremely important to build good test coverage for the 40 controls in the Ajax Control Toolkit. We’ve been focusing a lot of energy on building automated JavaScript unit tests which we can use to help us discover regressions. We decided to write the unit tests with the QUnit test framework. We picked QUnit because it is quickly becoming the standard unit testing framework in the JavaScript world. For example, it is the unit testing framework used by the jQuery team, the jQuery UI team, and many jQuery UI plugin developers. We had to make several enhancements to the QUnit framework in order to test the Ajax Control Toolkit. For example, QUnit does not support tests which include postbacks. We modified the QUnit framework so that it works with IFrames so we could perform postbacks in our automated tests. At this point, we have written hundreds of QUnit tests. For example, we have written 135 QUnit tests for the Accordion control. The QUnit tests are included with the Ajax Control Toolkit source code in a project named AjaxControlToolkit.Tests. You can run all of the QUnit tests contained in the project by opening the Default.aspx page. Automating the QUnit Tests across Multiple Browsers Automated tests are useless if no one ever runs them. In order for the QUnit tests to be useful, we needed an easy way to run the tests automatically against a matrix of browsers. We wanted to run the unit tests against Internet Explorer 6, Internet Explorer 7, Internet Explorer 8, Internet Explorer 9, Firefox, Chrome, and Safari automatically. Expecting a developer to run QUnit tests against every browser after every check-in is just too much to expect. It takes 20 seconds to run the Accordion QUnit tests. We are testing against 8 browsers. That would require the developer to open 8 browsers and wait for the results after each change in code. Too much work. Therefore, we built a JavaScript Test Server. Our JavaScript Test Server project was inspired by John Resig’s TestSwarm project. The JavaScript Test Server runs our QUnit tests in a swarm of browsers (running on different operating systems) automatically. Here’s how the JavaScript Test Server works: 1. We created an ASP.NET page named RunTest.aspx that constantly polls the JavaScript Test Server for a new set of QUnit tests to run. After the RunTest.aspx page runs the QUnit tests, the RunTest.aspx records the test results back to the JavaScript Test Server. 2. We opened the RunTest.aspx page on instances of Internet Explorer 6, Internet Explorer 7, Internet Explorer 8, Internet Explorer 9, FireFox, Chrome, Opera, Google, and Safari. Now that we have the JavaScript Test Server setup, we can run all of our QUnit tests against all of the browsers which we need to support with a single click of a button. A New Release of the Ajax Control Toolkit Each Month The Ajax Control Toolkit Issue Tracker contains over one thousand five hundred open issues and feature requests. So we have plenty of work on our plates J At CodePlex, anyone can vote for an issue to be fixed. Originally, we planned to fix issues in order of their votes. However, we quickly discovered that this approach was inefficient. Constantly switching back and forth between different controls was too time-consuming. It takes time to re-familiarize yourself with a control. Instead, we decided to focus on two or three controls each month and really focus on fixing the issues with those controls. 
This way, we can fix sets of related issues and avoid the randomization caused by context switching. Our team works in monthly sprints. We plan to do another release of the Ajax Control Toolkit each and every month. So far, we have completed one release of the Ajax Control Toolkit, which was released on April 1, 2011. We plan to release a new version in early May.

Conclusion

Fortunately, I work with a team of smart developers. We currently have 5 developers working on the Ajax Control Toolkit (not full-time, they are also building two very cool ASP.NET MVC applications). All the developers who work on our team are required to have strong JavaScript, jQuery, and ASP.NET MVC skills. In the interest of being as transparent as possible about our work on the Ajax Control Toolkit, I plan to blog frequently about our team’s ongoing work. In my next blog entry, I plan to write about the two Ajax Control Toolkit controls which are the focus of our work for the next release.

    Read the article

  • SQL SERVER – How to Recover SQL Database Data Deleted by Accident

    - by Pinal Dave
In Repair a SQL Server database using a transaction log explorer, I showed how to use ApexSQL Log, a SQL Server transaction log viewer, to recover a SQL Server database after a disaster. In this blog, I’ll show you how to use another SQL Server disaster recovery tool from ApexSQL in a situation when data is accidentally deleted. You can download ApexSQL Recover here, install, and play along. With a good SQL Server disaster recovery strategy, data recovery is not a problem. You have a reliable full database backup with valid data, a full database backup and subsequent differential database backups, or a full database backup and a chain of transaction log backups. But not all situations are ideal. Here we’ll address some sub-optimal scenarios where you can still successfully recover data.

If you have only a full database backup

This is the least optimal SQL Server disaster recovery strategy, as it doesn’t ensure minimal data loss. For example, data was deleted on Wednesday. Your last full database backup was created on Sunday, three days before the records were deleted. By using the full database backup created on Sunday, you will be able to recover SQL database records that existed in the table on Sunday. If there were any records inserted into the table on Monday or Tuesday, they will be lost forever. The same goes for records modified in this period. This method will not bring back modified records, only the old records that existed on Sunday. If you restore this full database backup, all your changes (intentional and accidental) will be lost and the database will be reverted to the state it had on Sunday. What you have to do is compare the records that were in the table on Sunday to the records on Wednesday, create a synchronization script, and execute it against the Wednesday database.

If you have a full database backup followed by differential database backups

Let’s say the situation is the same as in the example above, only you create a differential database backup every night. Use the full database backup created on Sunday and the last differential database backup (created on Tuesday). In this scenario, you will lose only the data inserted and updated after the differential backup created on Tuesday.

If you have a full database backup and a chain of transaction log backups

This is the SQL Server disaster recovery strategy that provides minimal data loss. With a full chain of transaction logs, you can recover the SQL database to an exact point in time. To provide optimal results, you have to know exactly when the records were deleted, because restoring to a later point will not bring back the records. This method requires restoring the full database backup first. If you have any differential database backup created after the last full database backup, restore the most recent one. Then, restore transaction log backups, one by one, in the order they were created, starting with the first one created after the restored differential database backup. Now, the table will be in the state before the records were deleted. You have to identify the deleted records, script them, and run the script against the original database. Although this method is reliable, it is time-consuming and requires a lot of space on disk. A minimal sketch of this restore sequence in T-SQL is shown below.
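To make the transaction-log-chain scenario concrete, here is a minimal T-SQL sketch of that restore sequence. This is only an illustration: the database name, logical file names, backup file paths, and the STOPAT time are all placeholders, and the backups are restored into a side copy so that the recovered rows can be scripted and applied to the original database.

-- Restore the last full backup into a separate copy of the database (all names and paths are placeholders).
RESTORE DATABASE AdventureWorks_Copy
FROM DISK = N'D:\Backup\AdventureWorks_Full.bak'
WITH MOVE N'AdventureWorks_Data' TO N'D:\Data\AdventureWorks_Copy.mdf',
     MOVE N'AdventureWorks_Log' TO N'D:\Data\AdventureWorks_Copy.ldf',
     NORECOVERY;

-- Restore the most recent differential backup taken after that full backup, if one exists.
RESTORE DATABASE AdventureWorks_Copy
FROM DISK = N'D:\Backup\AdventureWorks_Diff.bak'
WITH NORECOVERY;

-- Restore the transaction log backups in the order they were created.
RESTORE LOG AdventureWorks_Copy
FROM DISK = N'D:\Backup\AdventureWorks_Log_1.trn'
WITH NORECOVERY;

-- Stop just before the moment the records were deleted, then bring the copy online.
RESTORE LOG AdventureWorks_Copy
FROM DISK = N'D:\Backup\AdventureWorks_Log_2.trn'
WITH STOPAT = '2014-01-15 10:00:00', RECOVERY;

From the restored copy, you can script the deleted records and run that script against the original database, as described above.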
How to easily recover deleted records?

The following solution enables you to recover SQL database records even if you have no full or differential database backups and no transaction log backups.

To understand how ApexSQL Recover works, I’ll explain what happens when table data is deleted. Table data is stored in data pages. When you delete table records, they are not immediately deleted from the data pages, but marked to be overwritten by new records. Such records are not shown as existing anymore, but ApexSQL Recover can read them and create an undo script for them. How long will deleted records stay in the MDF file? It depends on many factors; as time passes, it becomes more likely that the records will be overwritten. The more transactions that occur after the deletion, the greater the chance that the records will be overwritten and permanently lost. Therefore, it’s recommended to create a copy of the database MDF and LDF files immediately (if you cannot take your database offline until the issue is solved) and run ApexSQL Recover on them. Note that a full database backup will not help here, as the records marked for overwriting are not included in the backup.

First, I’ll delete some records from the Person.EmailAddress table in the AdventureWorks database. I can delete these records in SQL Server Management Studio, or execute a script such as:

DELETE FROM Person.EmailAddress WHERE BusinessEntityID BETWEEN 70 AND 80

Then, I’ll start ApexSQL Recover and select From DELETE operation in the Recovery tab.

In the Select the database to recover step, first select the SQL Server instance. If it’s not shown in the drop-down list, click the Server icon to the right of the Server drop-down list and browse for the SQL Server instance, or type the instance name manually. Specify the authentication type and select the database in the Database drop-down list.

In the next step, you’re prompted to add additional data sources. As this can be a tricky step, especially for new users, ApexSQL Recover offers help via the Help me decide option. The Help me decide option guides you through a series of questions about the database transaction log and advises what files to add. If you know that you have no transaction log backups or detached transaction logs, or the online transaction log file has been truncated after the data was deleted, select No additional transaction logs are available. If you know that you have transaction log backups that contain the delete transactions you want to recover, click Add transaction logs. The online transaction log is listed and selected automatically. Click Add to add transaction log backups. It would be best if you have a full transaction log chain, as explained above.

The next step for this option is to specify the time range. Selecting a small time range around the time of deletion will create the recovery script just for the accidentally deleted records. A wide time range might script the records deleted on purpose, and you don’t want that. If needed, you can check the generated script and manually remove such records.

After that, for all data source options, the next step is to select the tables. Be careful here: if you deleted some data from other tables on purpose and don’t want to recover it, don’t select all tables, as ApexSQL Recover will create the INSERT script for them too.

The next step offers two options: to create a recovery script that will insert the deleted records back into the Person.EmailAddress table, or to create a new database, create the Person.EmailAddress table in it, and insert the deleted records. I’ll select the first one. The recovery process is completed and 11 records are found and scripted, as expected.
To see the script, click View script. ApexSQL Recover has its own script editor, where you can review, modify, and execute the recovery script. The INSERT INTO statements look like:

INSERT INTO Person.EmailAddress( BusinessEntityID, EmailAddressID, EmailAddress, rowguid, ModifiedDate)
VALUES( 70, 70, N'[email protected]' COLLATE SQL_Latin1_General_CP1_CI_AS, 'd62c5b4e-c91f-403f-b630-7b7e0fda70ce', '20030109 00:00:00.000' );

To execute the script, click Execute in the menu. If you want to check whether the records are really back, execute:

SELECT * FROM Person.EmailAddress WHERE BusinessEntityID BETWEEN 70 AND 80

As shown, ApexSQL Recover recovers SQL database data after accidental deletes even without the database backup that contains the deleted data and the relevant transaction log backups. ApexSQL Recover reads the deleted data from the database data file, so this method can be used even for databases in the Simple recovery model. Besides recovering SQL database records from a DELETE statement, ApexSQL Recover can help when records are lost due to a DROP TABLE or TRUNCATE statement, as well as repair a corrupted MDF file that cannot be attached to a SQL Server instance. You can find more information about how to recover lost SQL database data and repair a SQL Server database on the ApexSQL Solution center. There are solutions for various situations when data needs to be recovered.

Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
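As a side note on the recommendation above to copy the database MDF and LDF files before running a recovery tool: if a short outage is acceptable, a minimal T-SQL sketch could look like the following. The database name and file paths are placeholders, and the actual copy happens at the file-system level while the database is offline.

-- Take the database offline so the data and log files are no longer locked.
-- ROLLBACK IMMEDIATE disconnects active sessions, so use it with care.
ALTER DATABASE AdventureWorks SET OFFLINE WITH ROLLBACK IMMEDIATE;

-- Copy the MDF and LDF files at the operating-system level, for example:
--   copy D:\Data\AdventureWorks.mdf     E:\Copies\AdventureWorks.mdf
--   copy D:\Data\AdventureWorks_log.ldf E:\Copies\AdventureWorks_log.ldf

-- Bring the database back online once the copy has finished.
ALTER DATABASE AdventureWorks SET ONLINE;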

    Read the article

  • Top tweets SOA Partner Community – October 2012

    - by JuergenKress
    Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity SOA Community Deploying Fusion Order Demo on 11.1.1.6 by Antony Reynolds http://wp.me/p10C8u-vA leonsmiers ?Cant wait to test it >> 't waiRT @OracleSOA: Case Management patterns, session coverage from #OOW #OracleBPM #ACM #BPM http://bit.ly/OdcZL6 Danilo Schmiedel Bye bye San Francisco. #oow was a great conference in a wonderful city! Thanks! @soacommunity pic.twitter.com/lcYSe9xC OPITZ CONSULTING ?The Journey towards #Oracle #BPM @OpenWorld 2012 - Slides by @t_winterberg & H. Normann: http://ow.ly/edkWE #oow demed Full house at the SOA Customer Advisory Board! #oow12 http://instagr.am/p/QX9B8eLMLS/ Danilo Schmiedel "@whitehorsesnl: Had some great talks with the BPM guys at the DEMOgrounds. It is one of the best things at #oow" -> I agree!! @soacommunity Mark Simpson ?Fusion Middleware Global Innovation Awards: nice to pick up a soa and bpm with our customer. #oow Mark Simpson ?RT @SOASimone: #oraclesoa #oow hands on lab fully booked pic.twitter.com/pwI94Ew7 <--quick, provision some more compute power on the cloud! Oracle SOA ?Join us for BPM and Analytics: Process Dashboards. BAM, and Intelligent OptimizationMoscone South - 308#OracleBPM #OOW Oracle SOA ?Real-time public safety demo! License plate recognition and processing in London via Oracle Event Processing. #oow pic.twitter.com/WufesDBq Marc ?Nice session on customer success stories on #SOA11g on with @SOASimone Pro and cons and architectural overview. #oow pic.twitter.com/bzuhsujm Lucas Jellema Full length Keynote on Middleware #oow : http://medianetwork.oracle.com/video/player/1873556035001 … #oow_amis OracleBlogs ?Why Fusion Middleware matters to Oracle Applications and Fusion Applications customers? http://ow.ly/2stVQ0 OracleBlogs ?Open World Session - BPM, SOA and ADF Combined:Patterns learned from Fusion Applications http://ow.ly/2suhzf Ronald Luttikhuizen ?VENNSTER BLOG | Presentations at OpenWorld 2012 | http://blog.vennster.nl/2012/10/presentations-at-openworld-2012.html … Andrejus Baranovskis @dschmied @soacommunity next OOW for sure, and may be SOA community event ! @soacommunity Danilo Schmiedel ?@andrejusb Thanks Andrejus - I really enjoyed having a session with you at #oow. When is next time :-) ? @soacommunity Lionel Dubreuil ?@soacommunity #oow12 Today-1:15pm-Marriott Marquis Salon 7 Jump-starting Integration with Oracle Foundation Pack http://bit.ly/QKKJzF Ronald Luttikhuizen ?Impression from our fault handling session in OSB and SOA Suite from the audience @soacommunity @gschmutz #oow pic.twitter.com/WSg1Z89E Marc Nice session on Oracle Virtual Assembly for #SOA11g, @soacommunity Works with #exalogic but not required SOA Community ?Send your #soacommunity #oow pictures and blog posts @soacommunity or http://www.facebook.com/soacommunity Enjoy OOW ;-) Jon petter hjulstad Oracle BPM- Big leap forward in 11.1.1.7 ! Whitehorses ?Common BPM Use Cases from Oracle #bpm #oow pic.twitter.com/ofOv04EF Whitehorses ?Oracle BPM 11.1.1.7 top new features. 
Interesting #oow #oowbenelux pic.twitter.com/HY9QN5un SOA Community Industrialized SOA - topic of Business Technology Magazine http://wp.me/p10C8u-vi orclateamsoa ?A-Team Blog #ateam: The curious case of SOA Human tasks' automatic completion http://ow.ly/1mq6YU Simone Geib Look for this sign #oow #oraclesoa pic.twitter.com/MJsPV4PO Lucas Jellema My summary of Larry Ellison's keynote at #oow on the AMIS Blog: http://technology.amis.nl/2012/10/01/oow-2012-larry-ellisons-keynote-announcements-exa-cloud-database/ … #oow_amis gschmutz ?Join my #oow session "Five Cool Use Cases for the Spring Component" to see the power of Spring and SOA Suite combined! Moscone 310 - 3:15 PM Ronald Luttikhuizen Thanks to @soacommunity for great SOA/BPM dinner event yesterday night! #oow pic.twitter.com/v7x3i0DC OracleBlogs ?OSB, Service Callouts and OQL http://ow.ly/2sq6B2 OracleBlogs ?Cloud and On-Premises Applications Integration using Oracle Integration Adapters http://ow.ly/2sqiDy OracleBlogs ?Adapters, SOA Suite and More @Openworld 2012 http://ow.ly/2srdTg Eric Elzinga ?OSB, Service Callouts and OQL - Part 3, http://see.sc/JodzEx #oracleservicebus Donatas Valys interesting articles about soa industrialization to read #soa #industrialization http://it-republik.de/business-technology/bt-magazin-ausgaben/Industrialized-SOA-000516.html … gschmutz ?“@techsymp: 2012 Symposium Presentation Download Page Now Available! 75% of presentations published. http://www.servicetechsymposium.com ” find mine there.. Oracle BPM Customer Experience and BPM – From Efficiency to Engagement #bpm #oraclebpm #processmanagement #socialbpm http://pub.vitrue.com/Tahi SOA Community ?@soacommunity SOA Community Newsletter September 2012 http://wp.me/p10C8u-wa SOA Community again again again.... it is Oracle Open World 2012 http://wp.me/p10C8u-wk OracleBlogs ?SOA Proactive support http://ow.ly/2smrSJ demed ?@gschmutz on NoSQL at @techsymp http://lockerz.com/s/247601661 demed ?Just finished "#BigData and its impact on #SOA" talk @techsymp. Really enjoyed getting out of beaten path. #london #oep http://lockerz.com/s/247636974 OTNArchBeat ?Need help selling SOA to business stakeholders? Give them this free eBook. #soasuite http://pub.vitrue.com/hsQY SOA Community top Tweets SOA Partner Community &ndash; September 2012 http://wp.me/p10C8u-vc SOA Community Move Data into the grid for scalable, predictable response times http://wp.me/p10C8u-vv ServiceTechSymposium ?The September issue of the Service Technology Magazine is now published with six new items! Read them at http://www.servicetechmag.com Marc ?Reviewed @Packt_OracleFMW new book on SOA11g administration! Very good ! http://tinyurl.com/8pzd5ww SOA Community ?BPM Solution Catalogue&ndash;promote your process templates http://wp.me/p10C8u-vt OTNArchBeat ?BPM ADF Task forms: Checking whether the current user is in a BPM Swimlane | @ChrisKarlChan http://pub.vitrue.com/aPMG OTNArchBeat ?Cloud, automation drive new growth in SOA governance market | @JoeMcKendrick http://pub.vitrue.com/hNPv Simon Haslam ?Looking for "oak style"(!) advanced content but you're a middleware specialist? See #ukoug2012 #middlewaresunday http://2012.ukoug.org/default.asp?p=9355 … Simon Haslam ?The #ukoug2012 agenda is "go, go, go!" (as Murray would say!) http://2012.ukoug.org/agendagrid Germán Gazzoni SOA Spezial II verfügbar – Industralized SOA: Die überarbeitete und ergänzte Neuauflage des SOA Spezial Sonderhe... 
http://bit.ly/PAWwN9 Oracle SOA ?Flip thru new interactive "Oracle SOA Suite eBook-In the Customers Words" #middleware #soa #oraclesoa http://pub.vitrue.com/NzFZ SOA Community Follow SOA Community on Facebook http://www.facebook.com/soacommunity #soacommunity #opn SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit  www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: SOA Community twitter,SOA Community,Oracle SOA,Oracle BPM,BPM Community,OPN,Jürgen Kress

    Read the article

  • Microsoft TechEd 2010 - Day 3 @ Bangalore

    - by sathya
Microsoft TechEd 2010 - Day 3 @ Bangalore

Sorry for my delayed post on day 3: I had to travel from Blore to Chennai, so I couldn't write for the past two days. On day 3, as usual, we had a lot of simultaneous tracks with various sessions. This day I chose the Your Data, Our Platform track. It had sessions on the following 5 topics:

Developing Data-tier Applications in Visual Studio 2010 - by Sanjay Nagamangalam
SQL Server Query Optimization, Execution and Debugging Query Performance - by Vinod Kumar M
SQL Server Utility - It's about more than 1 SQL Server - by Vinod Kumar Jagannathan
Data Recovery / Consistency with CheckDB - by Vinod Kumar M
Developing with SQL Server Spatial and Deep dive into Spatial Indexing - by Pinal Dave

Developing Data-tier Applications in Visual Studio 2010 - by Sanjay Nagamangalam

This was one of the superb sessions I have attended. He explained all the concepts in detail with a demo. The important thing here is that there is something called a Data-tier Application project, newly introduced in VS2010, with which we can manage all our data along with our application inside VS itself. We can create DBs, tables, procs, views, etc. here itself, and once we deploy, it creates a compressed file called .dacpac which stores all the changes in table schema, created procs, etc. in that single file, which reduces our (developer's) effort in preparing the deployment scripts and giving them to the DBA. It also has some policy configurations which can be managed easily by checking some rules, like in Outlook. For example: if the SQL Server version is > 10 then deploy, else don't. This rule specifies that even if we try to deploy to a SQL Server DB with a version less than 10, it will not do it. And if we deploy some .dacpac to a SQL Server production DB with the option to upgrade the DB with this dacpac, once everything completes successfully it will say success, else it rolls back to the prior version. Even if it gets deployed successfully and at a later point of time you wish to revert back to the prior version, you can go ahead and delete the existing dacpac version so that it reverts to the older version of the DB changes. And for the good questions that were asked in the session, T-shirts were given.

SQL Server Query Optimization, Execution and Debugging Query Performance - by Vinod Kumar M

This one too was one of the best sessions. The speaker, Vinod, explained everything very clearly. This was a really useful session and, believe it or not, as far as I know, in the total 3 days of TechEd, except for the Keynote, this was the session where seats were full (house full) and people were even standing to attend it. Such a great one it was. The speaker did a deep dive into the query plan section and showed what actually causes the problem. It's all about understanding how SQL Server executes queries: we think in one way, and SQL Server never executes in that way. We need to understand that first. He also told us there might be two plans generated for a single query at a point of time because of parallel processors in the system. The key is here in every query: there is something called Estimated Row Count and Actual Row Count in the query plan. If the estimated row count by SQL Server tallies with the actual row count, your performance will be awesome. He shared some tweaks to achieve the same. After this, as usual, we had lunch.

SQL Server Utility - It's about more than 1 SQL Server - by Vinod Kumar Jagannathan

This was more of a DBA's session.
I'm really sorry, I was totally blank and I was not interested in attending this session, so I walked out to attend Migrating to the Cloud by Harish Ranganathan (my favorite speaker), but unfortunately that was some other person's session. There the speaker was talking about how to configure the connection strings in such a way that we can connect to the SQL Azure platform from our VS, and he also showed us how to deploy the same to Windows Azure. In between there were a lot of technical problems like the laptop hanging, the user getting locked out, and him switching between systems; also I came in halfway, so I wasn't able to listen to it fully.

In between, since I got an MCTS certification, they gave me a T-shirt with the line 'I am Certified. Are you?' and asked me to wear it. If we wear it we might get spotted and they would give us some goodies, so on the 3rd day I was wearing that T-shirt. I got spotted by Tarun, the person who was coordinating things about the certification; he was accompanied by a cameraman, they interviewed me about the certification, and I was shown live at TechEd and seen by 60,000 live viewers. I was really happy about that.

Data Recovery / Consistency with CheckDB - by Vinod Kumar M

This was one of the best sessions in the TechEd too. This guy is really amazing. In front of us he crashed a DB and showed how to recover it in 6 different ways for different numbers of failures. He talked about different types of error messages like 823, 824, and 825, msdb..suspect_pages, and DBCC CheckDB (and the different parameters to it); a small T-SQL sketch of these commands appears at the end of this post. I am really waiting for his session to get uploaded to the TechEd website. Here is his contact info if you wish to connect with him:

Twitter : @vinodk_sql
Website : www.ExtremeExperts.com
Blog : http://blogs.sqlxml.org/vinodkumar

Developing with SQL Server Spatial and Deep dive into Spatial Indexing - by Pinal Dave

Pinal Dave is a king in SQL; he is a SQL MVP and the owner of SQLAuthority.com. He took the session on spatial databases from the start and showed the different types of spatial data: Geometric and Geographic. Geometric: x and y axes, a planar surface. Geographic: a spherical surface, with 360 degrees as the maximum, which is used to represent geographic points on the earth and makes it easy to draw maps of different kinds. He had a lot of obstacles during his session, like rain coming inside the hall, mic wires bursting due to the rain, and videos going off on the display screens. In spite of that, he asked the audience to come to the front rows and managed to deliver a good session without PPTs, and finally we got the displays on and he showed demos of the same things he had explained orally. That was a really fun-filled, informative session. He gave some books to the people who asked good questions and answered his questions well, and I got one too (it was a book on Data Mining - Wrox Publishers).

And finally, after all these things, there was the keynote session to close TechEd, and we all assembled in a big hall where Mr. Ashok Soota, a man of around 70 and co-founder of Mindtree, was called to give a lecture on his successes. He explained about his past, which companies he switched between and for what reasons, his successes and his failures, what he learned from his past failures, and his successes and failures in his partnerships with other concerns. And there were some questions for him like: What is your suggestion for young entrepreneurs? How did you learn from past failures? What is reiterating your success?
What is your suggestion on partnerships? How do you choose partnerships? And so on.

And they said that at 7.30 PM there would be a party night, but unfortunately I was not able to attend it because I had to catch my train, and before that I had to pack, so I started at 7 itself. That's it about TechEd!!! Stay tuned for further technology updates.
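For anyone who wants to try the commands mentioned in the CheckDB and Spatial sessions above, here is a minimal T-SQL sketch. It is only an illustration (the database name and coordinates are placeholders), not the actual demos from those sessions.

-- Run a consistency check on a database (name is a placeholder).
DBCC CHECKDB (N'AdventureWorks') WITH NO_INFOMSGS, ALL_ERRORMSGS;

-- Pages that SQL Server has flagged as suspect (for example after 823/824 errors) are recorded here.
SELECT database_id, file_id, page_id, event_type, error_count, last_update_date
FROM msdb.dbo.suspect_pages;

-- A geography (spherical) point, of the kind discussed in the Spatial session.
DECLARE @g geography = geography::Point(12.97, 77.59, 4326);
SELECT @g.STAsText();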

    Read the article

  • svnstat script

    - by Kyle Hodgson
So I'm building out a shell script to check out all of our relevant svn repositories for analysis in svnstat. I've gotten all of this to work manually, and now I'm writing up a bash script in Cygwin on my Vista laptop, as I intend to move this to a Linux server at some point.

Edit: I gave up on this and wrote a simple .bat script. I'll figure out the Linux deployment some other way.

Edit: added the sleep 30 and svn log commands. I can tell now, with the svn log command, that it's not getting to the svn log ... this time, it did Applications, and ran the log, and then checked out Database, and froze. I'll put the sleep 30 before and after the log this time.

co2.sh

#!/bin/bash

function checkout {
  mkdir $1
  svn checkout svn://dev-server/$1 $1
  svn log --verbose --xml >> svn.log $1
  sleep 30
}

cd /cygdrive/c/Users/My\ User/Documents/Repos/wc

checkout Applications
checkout Database
checkout WebServer/www.mysite.com
checkout WebServer/anotherhost.mysite.com
checkout WebServer/AnotherApp
checkout WebServer/thirdhost.mysite.com
checkout WebServer/fourthhost.mysite.com
checkout WebServer/WebServices

It works, for the most part - but for some reason it has a tendency to stop working after a few repositories, usually right after finishing a repository and before going to the next one. When it fails, it will not recover on its own. I've tried commenting out the svn line; it goes in and creates all the directories just fine when I do that - so it's not that. I'm looking for direction as well as direct advice. Cygwin has been very stable for me, but I did start using the native rxvt instead of "bash in a cmd.exe window" recently. I don't think that's the problem, as I've left top on remote systems running all night and rxvt didn't seem to mind. Also I haven't done any bash scripting in Cygwin, so I suppose this might not be recommended; though I can't see why not. I don't want all of WebServer, hence me only checking out certain folders like that. What I suspect is that something is hanging up the svn checkout. Any ideas here?

Edit: this time when I hit ctrl+z to cancel out, I forgot I was on Windows and typed ps to see if the job was still running; and as you can see there are lots of svn processes hanging around... strange.
Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $
Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $ jobs
[1]- Stopped    bash co2.sh
[2]+ Stopped    ./co2.sh
Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $ kill %1
[1]- Stopped    bash co2.sh
Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $
[1]- Terminated bash co2.sh
Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $
Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $
Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $ ps
   PID  PPID  PGID WINPID TTY  UID    STIME COMMAND
  7872     1  7872   2340   0 1000   Jun 29 /usr/bin/svn
  7752     1  6140   7828   1 1000   Jun 29 /usr/bin/svn
  6192     1  5044   2192   1 1000   Jun 30 /usr/bin/svn
  7292     1  7452   1796   1 1000   Jun 30 /usr/bin/svn
  6236     1  7304   7468   2 1000    Jul 2 /usr/bin/svn
  1564     1  5032   7144   2 1000    Jul 2 /usr/bin/svn
  9072     1  3960   6276   3 1000    Jul 3 /usr/bin/svn
  5876     1  5876   5876 con 1000 11:22:10 /usr/bin/rxvt
   924  5876   924  10192   4 1000 11:22:10 /usr/bin/bash
  7212     1  7332   5584   4 1000 13:17:54 /usr/bin/svn
  9412     1  5480   8840   4 1000 15:38:16 /usr/bin/svn
S 8128   924  8128   9452   4 1000 17:38:05 /usr/bin/bash
  9132  8128  8128   8172   4 1000 17:43:25 /usr/bin/svn
  3512     1  3512   3512 con 1000 17:43:50 /usr/bin/rxvt
I 10200  3512 10200   6616   5 1000 17:43:51 /usr/bin/bash
  9732     1  9732   9732 con 1000 17:45:55 /usr/bin/rxvt
  3148  9732  3148   8976   6 1000 17:45:55 /usr/bin/bash
  5856  3148  5856    876   6 1000 17:51:00 /usr/bin/vim
  7736   924  7736   8036   4 1000 17:53:26 /usr/bin/ps
Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $ jobs
[2]+ Stopped    ./co2.sh
Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $

Here's an strace on the PID of the hung svn program; it's been like this for hours. Looks like it's just doing nothing. I keep suspecting that some interruption on the server is causing this; does svn have a locking mechanism I'm not aware of?

Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $ strace -p 7304
**********************************************
Program name: C:\cygwin\bin\svn.exe (pid 7304, ppid 6408)
App version:  1005.25, api: 0.156
DLL version:  1005.25, api: 0.156
DLL build:    2008-06-12 19:34
OS version:   Windows NT-6.0
Heap size:    402653184
Date/Time:    2009-07-06 18:20:11
**********************************************

    Read the article
