Search Results

Search found 7085 results on 284 pages for 'termine updates'.


  • Failing to save a managed object to Core Data after its properties were updated

    - by Tzur Gazit
    I have no trouble creating the object, but updating it fails. Here is the creation code:

        // Save data from pList to Core Data for the first time
        - (void)saveToCoreData:(NSDictionary *)plistDictionary
        {
            // Create system parameter entity
            SystemParameters *systemParametersEntity = (SystemParameters *)[NSEntityDescription
                insertNewObjectForEntityForName:@"SystemParameters"
                         inManagedObjectContext:mManagedObjectContext];

            // GPS SIMULATOR
            NSDictionary *GpsSimulator = [plistDictionary valueForKey:@"GpsSimulator"];
            [systemParametersEntity setMGpsSimulatorEnabled:[[GpsSimulator objectForKey:@"Enabled"] boolValue]];
            [systemParametersEntity setMGpsSimulatorFileName:[GpsSimulator valueForKey:@"FileName"]];
            [systemParametersEntity setMGpsSimulatorPlaybackSpeed:[[GpsSimulator objectForKey:@"PlaybackSpeed"] intValue]];

            [self saveAction];
        }

    During execution the cached copy is changed and then saved (or so I try) to the database. Here is the code to save the changed copy:

        // Save changed system parameters back to Core Data
        - (void)saveSystemParametersToCoreData:(SystemParameters *)theSystemParameters
        {
            // Step 1: Select data
            NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
            NSEntityDescription *entity = [NSEntityDescription entityForName:@"SystemParameters"
                                                      inManagedObjectContext:mManagedObjectContext];
            [fetchRequest setEntity:entity];
            NSError *error = nil;
            NSArray *items = [self.managedObjectContext executeFetchRequest:fetchRequest error:&error];
            [fetchRequest release];
            if (error) {
                NSLog(@"CoreData: saveSystemParametersToCoreData: Unresolved error %@, %@", error, [error userInfo]);
                abort();
            }

            // Step 2: Update object
            SystemParameters *systemParameters = [items objectAtIndex:0];

            // GPS SIMULATOR
            [systemParameters setMGpsSimulatorEnabled:[theSystemParameters mGpsSimulatorEnabled]];
            [systemParameters setMGpsSimulatorFileName:[theSystemParameters mGpsSimulatorFileName]];
            [systemParameters setMGpsSimulatorPlaybackSpeed:[theSystemParameters mGpsSimulatorPlaybackSpeed]];

            // Step 3: Save updates
            [self saveAction];
        }

    As you can see, I fetch the object that I want to update, change its values, and save. Here is the saving code:

        - (void)saveAction
        {
            NSError *error;
            if (![[self mManagedObjectContext] save:&error]) {
                NSLog(@"ERROR:saveAction. Unresolved Core Data Save error %@, %@", error, [error userInfo]);
                exit(-1);
            }
        }

    The persistent store method:

        - (NSPersistentStoreCoordinator *)persistentStoreCoordinator
        {
            if (mPersistentStoreCoordinator != nil) {
                return mPersistentStoreCoordinator;
            }
            NSString *path = [self databasePath];
            NSURL *storeUrl = [NSURL fileURLWithPath:path];
            NSError *error = nil;
            mPersistentStoreCoordinator = [[NSPersistentStoreCoordinator alloc]
                initWithManagedObjectModel:[self managedObjectModel]];
            if (![mPersistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType
                                                           configuration:nil
                                                                     URL:storeUrl
                                                                 options:nil
                                                                   error:&error]) {
                NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
                abort();
            }
            return mPersistentStoreCoordinator;
        }

    There is no error, but the SQLite file is not updated, hence the data is not persistent. Thanks in advance.
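
    One thing worth checking (a hedged observation, not a confirmed diagnosis): the update path fetches through self.managedObjectContext, while saveAction saves through mManagedObjectContext. If those two accessors ever resolve to different context instances, the changes land in a context that is never saved. A minimal Objective-C sanity check might look like this:

        // Sketch: confirm both accessors return the same context, and that it
        // actually has pending changes before the save is attempted.
        NSAssert(self.managedObjectContext == mManagedObjectContext,
                 @"Two different managed object contexts are in play");
        if ([mManagedObjectContext hasChanges]) {
            NSError *error = nil;
            if (![mManagedObjectContext save:&error]) {
                NSLog(@"Save failed: %@, %@", error, [error userInfo]);
            }
        } else {
            NSLog(@"No pending changes - the edits happened in another context.");
        }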


  • Can I use WCF to replace my current Web Service and Windows Service combination?

    - by gun_shy
    I need a little bit of advice regarding the situation I am faced with. The current arrangement I have been tasked with improving just doesn't sit well with me, and I feel like there is a better way to do it. The more I read about WCF, the more I get the feeling that it might be what I am looking for.

    Right now, I have an ASP.NET client, a .NET web service, a Windows service, a MS SQL database, and a third-party application that is used for processing a group of 'project' files into a finalized file. Since the third-party application can only handle processing one 'project' at a time, the combination of the web service, Windows service, and database has been arranged to create a job queue manager for the third-party application.

    The client sends a zip 'project' file containing multiple sub-files to the web service. The web service adds a new 'project' line to the database, generating a unique job id. The zip file is expanded to a folder location on the server using the job id as the folder name. The web service then returns the job id to the client. The client will use this id to poll the web service for the status of the job submitted. When the job is complete, the client will request the processed file.

    The Windows service polls the database every x minutes. If a new job exists, the service will pull the oldest job and send it to the third-party app for processing. If the processing succeeds, the Windows service updates the project line in the database, marking the job complete. The Windows service will continue to process any non-complete jobs in the database until there are no more. When it stops finding any jobs, it will sleep x minutes and then poll the database again.

    I do not like the fact that the Windows service has to poll the database. If there is only one job submitted, the client will have to wait for the Windows service to poll, and then wait while the 'project' is being processed. It seems like WCF could be used to combine the web and Windows services using a combination of InstanceContextMode.Single and ConcurrencyMode.Multiple. So far, I have been unable to find any articles or examples that would point me in the right direction. Can WCF be utilized to accomplish the job queue logic of the current arrangement in a better way? As always, any help is more than appreciated.
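
    For what it's worth, a singleton WCF service can indeed own the queue in-process, so that a submit wakes the worker immediately instead of waiting for a poll. The C# sketch below only illustrates that shape; every type and member name in it is invented for the example, not taken from the question:

        // Minimal sketch of a single-instance, multi-threaded WCF job queue.
        using System;
        using System.Collections.Concurrent;
        using System.ServiceModel;
        using System.Threading.Tasks;

        [ServiceContract]
        public interface IJobQueueService
        {
            [OperationContract] Guid Submit(byte[] zipProject);
            [OperationContract] string GetStatus(Guid jobId);
        }

        [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                         ConcurrencyMode = ConcurrencyMode.Multiple)]
        public class JobQueueService : IJobQueueService
        {
            private readonly BlockingCollection<Guid> _queue = new BlockingCollection<Guid>();
            private readonly ConcurrentDictionary<Guid, string> _status =
                new ConcurrentDictionary<Guid, string>();

            public JobQueueService()
            {
                // One long-running worker feeds the single-instance third-party app.
                Task.Factory.StartNew(ProcessJobs, TaskCreationOptions.LongRunning);
            }

            public Guid Submit(byte[] zipProject)
            {
                Guid id = Guid.NewGuid();
                // ...expand the zip to a folder named after id, record the job in the DB...
                _status[id] = "Queued";
                _queue.Add(id);   // wakes the worker at once - no polling latency
                return id;
            }

            public string GetStatus(Guid jobId)
            {
                string s;
                return _status.TryGetValue(jobId, out s) ? s : "Unknown";
            }

            private void ProcessJobs()
            {
                foreach (Guid id in _queue.GetConsumingEnumerable())
                {
                    _status[id] = "Processing";
                    // ...invoke the third-party app, mark the row complete in the DB...
                    _status[id] = "Complete";
                }
            }
        }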


  • SQL design question regarding schema, and whether a name/value pair table is the best solution

    - by Aur
    I am having a small problem trying to decide on a database schema for a current project. I am by no means a DBA. The application parses through a file based on user input and enters that data in the database. The number of fields that can be parsed is between 1 and 42 at the current moment.

    The current design of the database is entirely flat, with 42 columns; some are repeated columns such as address1, address2, address3, etc. That says I should normalize the data. However, data integrity is not needed at this moment, and the way the data is shaped I'm looking at several joins. Not a bad thing, but the data is still in a 1-to-1 relationship and I still see a lot of empty fields per row. So my concern is that this does not allow the database or the application to be very extensible. If they want to add more fields to be parsed (which they do), then I'd need to create another table and add another foreign key to the linking table.

    The third option is a table where the fields are defined, and a table for each record. So what I was thinking is to make a table that stores the value and then links to those two tables. The problem is I can picture the size of that table growing large depending on the input size. If someone gives me a file with 300,000 records, then 300,000 x 40 = 12 million rows, so I have some reservations. However, I think if I get to that point then I should be happy it is being used. This option also allows for more customized display of information, albeit a bit more work, but little rework even if you add more fields.

    So the problem boils down to:

    1. The current design is a flat file, which makes extending it hard, and it is not normalized.
    2. Normalize the tables, although there are no real benefits for the moment; but requirements change.
    3. Normalize it down into the name/value pair and hope size doesn't hurt.

    There are a large number of inserts, updates, and selects against that table, so performance is a worry, but I believe the saying is "design now, performance-test later"? I'm probably just missing something practical, so any comments would be appreciated, even if it's a quick sanity check. Thank you for your time.
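
    For reference, option 3 is usually called an entity-attribute-value (EAV) layout. A minimal SQL sketch of that shape follows; all table and column names are invented for illustration:

        -- Field definitions, one row per parsable field (up to 42 today).
        CREATE TABLE FieldDef (
            FieldId   INT PRIMARY KEY,
            FieldName VARCHAR(50) NOT NULL
        );

        -- One row per parsed record.
        CREATE TABLE ParsedRecord (
            RecordId  INT PRIMARY KEY
        );

        -- The large name/value table: one row per (record, field) actually present,
        -- so empty fields cost nothing. The composite key keeps joins indexed.
        CREATE TABLE FieldValue (
            RecordId INT NOT NULL REFERENCES ParsedRecord (RecordId),
            FieldId  INT NOT NULL REFERENCES FieldDef (FieldId),
            Value    VARCHAR(255) NULL,
            PRIMARY KEY (RecordId, FieldId)
        );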


  • Self-referencing a table

    - by mue
    Hello, I'm new to NHibernate and have a problem. Perhaps somebody can help me here. Given a User class with many, many properties:

        public class User
        {
            public virtual Int64 Id { get; private set; }
            public virtual string Firstname { get; set; }
            public virtual string Lastname { get; set; }
            public virtual string Username { get; set; }
            public virtual string Email { get; set; }
            ...
            public virtual string Comment { get; set; }
            public virtual UserInfo LastModifiedBy { get; set; }
        }

    Here is some DDL for the table:

        CREATE TABLE USERS (
            "ID"             BIGINT       NOT NULL,
            "FIRSTNAME"      VARCHAR(50)  NOT NULL,
            "LASTNAME"       VARCHAR(50)  NOT NULL,
            "USERNAME"       VARCHAR(128) NOT NULL,
            "EMAIL"          VARCHAR(128) NOT NULL,
            ...
            "LASTMODIFIEDBY" BIGINT       NOT NULL
        ) IN "USERSPACE1";

    The database field LASTMODIFIEDBY holds, for auditing purposes, the id of the user who acted in the case of inserts or updates. This would normally be an admin. Because the UI shall display not this Int64 but the admin's name (a pattern like 'Lastname, Firstname'), I need to retrieve these values by self-referencing the USERS table. Also, a whole object of type User would be overkill given the number of unwanted fields, so there is a class UserInfo with a much smaller footprint:

        public class UserInfo
        {
            public Int64 Id { get; set; }
            public string Firstname { get; set; }
            public string Lastname { get; set; }
            public string FullnameReverse
            {
                get
                {
                    return string.Format("{0}, {1}",
                        Lastname ?? string.Empty, Firstname ?? string.Empty);
                }
            }
        }

    So here starts the problem. Actually, I have no clue how to accomplish this task. I'm not sure whether I must also provide a mapping for the UserInfo class and not only for the User class. I'd like to integrate UserInfo as a composite-element within the mapping for the User class, but I don't know how to define the mapping between the USERS.ID and USERS.LASTMODIFIEDBY table fields. Hopefully I described my problem clearly enough to get some hints. Thanks a lot!
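
    One common way to handle this (a sketch only; verify the element spellings against your NHibernate version) is to map UserInfo as a second, read-only entity over the same USERS table, and then reference it from User with a many-to-one on LASTMODIFIEDBY rather than as a composite-element:

        <!-- Lightweight, immutable projection of USERS. -->
        <class name="UserInfo" table="USERS" mutable="false">
          <id name="Id" column="ID" />
          <property name="Firstname" column="FIRSTNAME" />
          <property name="Lastname" column="LASTNAME" />
        </class>

        <class name="User" table="USERS">
          <id name="Id" column="ID" />
          <property name="Firstname" column="FIRSTNAME" />
          <!-- ...remaining properties... -->
          <many-to-one name="LastModifiedBy" class="UserInfo" column="LASTMODIFIEDBY" />
        </class>

    Note that UserInfo's members would likely need to become virtual (or the class mapped with lazy="false") for NHibernate to proxy it.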


  • Rails controller not rendering correct view when form is force-submitted by Javascript

    - by whazzmaster
    I'm using Rails with jQuery, and I'm working on a page for a simple site that prints each record to a table. The only editable field for each record is a checkbox. My goal is that every time a checkbox is changed, an ajax request updates that boolean attribute for the record (i.e., no submit button). My view code:

        <td>
          <% form_remote_tag :url => admin_update_path,
                             :html => { :id => "form#{lead.id}" } do %>
            <%= hidden_field :lead, :id, :value => lead.id %>
            <%= check_box :lead, :contacted,
                          :id => "checkbox" + lead.id.to_s,
                          :checked => lead.contacted,
                          :onchange => "$('#form#{lead.id}').submit();" %>
          <% end %>
        </td>

    In my routes.rb, admin_update_path is defined by:

        map.admin_update 'update', :controller => "admin", :action => "update", :method => :post

    I also have an RJS template to render back an update. The contents of this file are currently just for testing (I just wanted to see if it worked; this will not be the ultimate functionality on a successful save):

        page << "$('#checkbox#{@lead.id}').hide();"

    When clicked, the ajax request is successfully sent, with the correct params, and the action on the controller can retrieve the record and update it just fine. The problem is that it doesn't send back the JS; it changes the page in the browser and renders the generated Javascript as plain text rather than executing it in-place. Rails does some behind-the-scenes stuff to figure out if the incoming request is an ajax call, and I can't figure out why it's interpreting the incoming request as a regular web request as opposed to an ajax request. I may be missing something extremely simple here, but I've kind-of burned myself out looking, so I thought I'd ask for another pair of eyes. Thanks in advance for any info!
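
    A hedged guess at the cause: calling $('#form...').submit() in jQuery fires the browser's native form submission, bypassing the Ajax onsubmit handler that form_remote_tag attaches (which targets Prototype in stock Rails of that era anyway). One way to keep the round trip as an XHR is to issue it from jQuery directly; element ids here follow the question's naming, and leadId is an illustrative variable holding the record's id:

        // Post the form over Ajax and eval the returned RJS as script.
        $('#checkbox' + leadId).change(function () {
          var form = $('#form' + leadId);
          $.post(form.attr('action'), form.serialize(), null, 'script');
        });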


  • Communicate progress from local Service

    - by kpdvx
    An application I'm building uses a local Service for downloading files from the web to the phone's SD card. In this app users can browse lists of books, and read them while online. A user can also download a pdf copy of a book for offline viewing.

    To handle downloads I'm using a locally bound Service. I do not want this Service to run all the time, only when downloading files. So that the Service can shut itself down when its tasks are complete, I am not binding to the service; rather, I'm sending an "enqueue for download" command through the Intent passed to Context.startService.

    Books available for download are shown in a list. A user can choose to download a book by clicking on its row in the list. On download, I need to show download progress using a ProgressBar on the actual book list row. I need to also show, on the rows, if a book is enqueued for download, or if its download has completed or failed. The books can be shown in different activities throughout the application (in search, or in the user's list of favorite books, for example). When the books are shown in different places, these are not the same objects, but they are uniquely identified by their bookId.

    Because I do not want to bind to the service from every Activity, my tentative plan was to use a public static final HashMap on the Service class itself to contain a mapping of bookId to download status: an enum of enqueued, downloading, cancelled, etc. Each book view, when displayed, would check this static HashMap and, if the bookId is in the map, retrieve and display its status. I don't particularly like this idea, but at the moment it's the only way I can think of to retrieve status from the Service without having to bind to it and start it.

    Additionally, I need to retrieve the download progress percent from the Service for a given bookId, if it is the active download. Again, I'd rather not bind to the service from every activity, so I'm not sure how to go about retrieving current progress from the Service. My current plan is to use some sort of singleton mediator that the Service will push updates to, and the views can read from. But I'm not terribly happy with this idea either.

    The reasons I'd like to avoid binding to the Service from each Activity are: 1) I'm already running another Service, and 2) binding is verbose and I'd like to avoid needing to pass around a reference to the Service (but admittedly this isn't too much of a problem). Perhaps binding to the local Service isn't expensive enough to warrant this other setup? Should I not be concerned about binding to it from each Activity? Maybe this is a non-issue?
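
    A hedged alternative to the static HashMap (the action and extra names below are invented for illustration): the download Service can broadcast status updates, and whichever Activity is visible registers a receiver and updates only the rows whose bookId matches. No binding is needed in either direction:

        // Inside the download Service:
        public class DownloadService extends android.app.Service {
            public static final String ACTION_DOWNLOAD_STATUS = "com.example.DOWNLOAD_STATUS";

            private void publishStatus(long bookId, String status, int percent) {
                android.content.Intent i = new android.content.Intent(ACTION_DOWNLOAD_STATUS);
                i.putExtra("bookId", bookId);
                i.putExtra("status", status);   // e.g. ENQUEUED / DOWNLOADING / FAILED / DONE
                i.putExtra("percent", percent);
                sendBroadcast(i);
            }

            @Override
            public android.os.IBinder onBind(android.content.Intent intent) { return null; }
        }

        // Inside any Activity that shows book rows:
        private final android.content.BroadcastReceiver statusReceiver =
                new android.content.BroadcastReceiver() {
            @Override
            public void onReceive(android.content.Context ctx, android.content.Intent i) {
                long bookId = i.getLongExtra("bookId", -1);
                int percent = i.getIntExtra("percent", 0);
                // find the visible row for bookId and update its ProgressBar / status label
            }
        };
        // register in onResume(), unregister in onPause():
        //   registerReceiver(statusReceiver,
        //       new android.content.IntentFilter(DownloadService.ACTION_DOWNLOAD_STATUS));
        //   unregisterReceiver(statusReceiver);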


  • Drupal's profile_save_profile doesn't work in hook_cron when run by the server's cron

    - by anschauung
    I have a problem with the following implementation of hook_cron in Drupal 6.1.3. The script below runs exactly as expected: it sends a welcome letter to new members, and updates a hidden field in their profile to designate that the letter has been sent. There are no errors in the letter, all new members are accounted for, etc.

    The problem is that the last line (updating the profile) doesn't seem to work when Drupal cron is invoked by the 'real' cron on the server. When I run cron manually (such as via /admin/reports/status/run-cron) the profile fields get updated as expected. Any suggestions as to what might be causing this?

    (Note, since someone will suggest it: members join by means outside of Drupal, and are uploaded to Drupal nightly, so Drupal's built-in welcome letters won't work (I think).)

        <?php
        function foo_cron() {
          // Find users who have not received the new member letter,
          // and send them a welcome email.
          // Get users who have not recd a message, as per the profile value setting.
          $pending_count_sql = "SELECT COUNT(*) FROM {profile_values} v
                                WHERE (v.value = 0) AND (v.fid = 7)";
          // fid 7 is the profile field for profile_intro_email_sent
          if (db_result(db_query($pending_count_sql))) {
            // Load the message template, since we know we have users to feed into it.
            $email_template_file = "/home/foo/public_html/drupal/"
              . drupal_get_path('module', 'foo')
              . "/emails/foo-new-member-email-template.txt";
            $email_template_data = file_get_contents($email_template_file);
            fclose($email_template_fh);

            // We'll just pull the uid, since we have to run user_load anyway.
            $query = "SELECT v.uid FROM {profile_values} v
                      WHERE (v.value = 0) AND (v.fid = 7)";
            $result = db_query(($query));

            // Loop through the uids, loading profiles so as to access
            // string replacement variables.
            while ($item = db_fetch_object($result)) {
              $new_member = user_load($item->uid);
              $translation_key = array(
                // ... code that generates the string replacement array ...
              );

              // Compose the email component of the message, and send to user.
              $email_text = t($email_template_data, $translation_key);
              $language = user_preferred_language($new_member); // member's language preference
              $params['subject'] = 'New Member Benefits - Welcome to FOO!';
              $params['content-type'] = 'text/plain; charset=UTF-8; format=flowed;';
              $params['content'] = $email_text;
              drupal_mail('foo', 'welcome_letter', $new_member->mail, $language,
                          $params, '[email protected]');

              // Mark the user's profile to indicate that the message was sent.
              $change = array(
                // Rebuild all of the profile fields in this category,
                // since they'll be deleted otherwise.
                'profile_first_name' => $new_member->profile_first_name,
                'profile_last_name' => $new_member->profile_last_name,
                'profile_intro_email_sent' => 1);
              profile_save_profile($change, $new_member, "Membership Data");
            }
          }
        }
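
    A hedged hunch worth checking (not a confirmed diagnosis): profile.module decides what it may save based on the acting user, and server-invoked cron runs as the anonymous user, whereas running cron from /admin/reports/status/run-cron runs as your logged-in admin. If the hidden field is being skipped for that reason, writing the flag directly sidesteps the permission path; fid 7 and the uid come from the question's own code:

        <?php
        // Untested sketch: set the 'sent' flag without going through profile_save_profile().
        db_query("UPDATE {profile_values} SET value = 1 WHERE fid = %d AND uid = %d",
                 7, $new_member->uid);
        ?>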


  • PHP uploads and checking

    - by user147685
    Hi all, I want to update/insert a file in the database by using an upload box, but before that it should check the file type so that only pdf files can be uploaded. Something is wrong with the code and I don't know what; please help. Here is part of my update code:

        $id = $rs['id'];
        $qry = "SELECT a.faillampiran FROM {$CFG->prefix}ptk_lampiran a, {$CFG->prefix}ptk b
                WHERE a.ptkid=$id and b.id=$id";
        $sql = get_records_sql($qry);

        if ($_POST['check']) {
            $ext = pathinfo($faillampiran, PATHINFO_EXTENSION);
            $err = "Upload Only PDF File. ";
            if ($ext == 'pdf') {
                $qry = "UPDATE {$CFG->prefix}ptk_lampiran SET faillampiran='".$faillampiran."'
                        WHERE ptkid='".$_GET['id']."'";
                $sql = mysql_query($qry);
                $qry = "SELECT a.faillampiran FROM {$CFG->prefix}ptk_lampiran a, {$CFG->prefix}ptk b
                        WHERE b.id = '".$rs[id]."' AND a.ptkid = '".$rs[id]."' ";
                $sql = get_records_sql($qry);
                foreach ($sql as $rs) {
                    ?>
                    <?=basename($rs->faillampiran); ?><br>
                    <?
                }
                ?>
                <tr><td></td><td></td><td>
                  <input type="file" size="50" name="faillampiran" alt="faillampiran"
                         value="<?=$faillampiran;?>" />
                  <input type="submit" name="edit" value="Muat naik fail ini" /><br />
                </td></tr>
                <?
            } else {
                echo "<script>alert('$err')</script>";
            }
        } else {
            foreach ($sql as $rs) {
                ?>
                <?=basename($rs->faillampiran); ?><br>
                <?
            }
            ?>
            <input type="file" size="50" name="faillampiran" alt="faillampiran"
                   value="<?=$faillampiran;?>" />
            <input type="submit" name="edit" value="Muat naik fail ini" /><br />
            <?
        }

    Sorry about the messiness of the code; I'm still a beginner with these languages. Thanks.
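
    In case it helps, a hedged sketch of the usual shape for this check: an uploaded file arrives in $_FILES (not in a plain variable), so the extension test would normally be made against $_FILES['faillampiran']['name'] before the UPDATE is run. The field name comes from the question's form; the rest is illustrative:

        <?php
        if (isset($_FILES['faillampiran']) && $_FILES['faillampiran']['error'] === UPLOAD_ERR_OK) {
            $ext = strtolower(pathinfo($_FILES['faillampiran']['name'], PATHINFO_EXTENSION));
            if ($ext === 'pdf') {
                // move_uploaded_file(...) to a safe location, then run the UPDATE
                // with the stored path or filename.
            } else {
                echo "<script>alert('Upload Only PDF File.')</script>";
            }
        }
        ?>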


  • Improving performance for WRITE operation on Oracle DB in Java

    - by Lucky
    I have a typical scenario and need to understand the best possible way to handle it, so here it goes. I'm developing a solution that will retrieve data from a remote SOAP-based web service and will then push this data to an Oracle database on the network. This will be a scheduled task that executes every 15 minutes. I have event queues on the remote service that contain the INSERT/UPDATE/DELETE operations done since the last retrieval; once I retrieve the events for the last 15 minutes, it again adds events for the next retrieval.

    Now, it's just pushing data to Oracle, so all my interactions are INSERT and UPDATE statements. There are around 60 tables on Oracle, with some of them having 100+ columns. Moreover, for every 15-minute cycle there would be around 60-70 inserts, 100+ updates and 10-20 deletes. This will be an executable jar file that terminates after the operation and starts again on the next 15-minute cycle. So, I need to understand how I should handle WRITE operations (best practices) to improve performance for this application as a whole.

    Current test code (on every cycle):

    - Connects to the remote service to get events.
    - Creates a connection with the DB (single connection object).
    - Identifies the type of operation (INSERT/UPDATE/DELETE) and the table on which it is done.
    - After the above, calls the respective method based on the type of operation and table.
    - Uses a PreparedStatement with positional parameters, retrieves each column value from the remote service, and assigns it to the statement parameters.
    - Commits the statement and returns to the get-event class to process the next event.

    The above is repeated till all the retrieved events are processed, after which the program closes and then starts again on the next cycle, and everything repeats. Thanks for the help!
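
    Not an authoritative answer, but the usual first lever for this kind of write volume is to stop committing per statement and batch instead. A Java sketch under those assumptions (the table, column and class names are invented):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;
        import java.util.List;

        public class BatchWriter {
            // Queue many rows into one round trip and commit once per cycle.
            public static void writeInserts(Connection conn, List<Event> events) throws SQLException {
                conn.setAutoCommit(false);
                String sql = "INSERT INTO target_table (id, payload) VALUES (?, ?)";
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    for (Event e : events) {
                        ps.setLong(1, e.id);
                        ps.setString(2, e.payload);
                        ps.addBatch();
                    }
                    ps.executeBatch();
                }
                conn.commit();
            }
        }

        class Event { long id; String payload; }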


  • Entity Framework: An object with the same key already exists in the ObjectStateManager

    - by NealR
    I see that this question has been asked a lot; however, I haven't found anything yet that solves the problem I'm having. Obviously I'm using the Entity Framework to perform an update to a record. Once the updates are complete, however, whenever I try to save I get the following error message:

        An object with the same key already exists in the ObjectStateManager

    At first I was passing in a collection object from the view that contained a copy of the ZipCodeTerritory model object zipToUpdate. I changed the code by pulling this object out and just sending in the relevant fields instead. However, I'm still getting the same error. What's also weird is that the first time I run this code, it works fine. Any attempt after that, I get the error.

    Controller. Here is the code from the method calling the edit function:

        public static string DescriptionOnly(ZipCodeIndex updateZip)
        {
            if (!string.IsNullOrWhiteSpace(updateZip.newEffectiveDate) ||
                !string.IsNullOrWhiteSpace(updateZip.newEndDate))
            {
                return "Neither effective or end date can be present if updating Territory Code only; ";
            }
            _updated = 0;
            foreach (var zipCode in updateZip.displayForPaging.Where(x => x.Update))
            {
                ProcessAllChanges(zipCode, updateZip.newTerritory, updateZip.newStateCode,
                                  updateZip.newDescription, updateZip.newChannelCode);
            }
            _msg += _updated + " record(s) updated; ";
            return _msg;
        }

    And here is the method that actually does the updating:

        private static void ProcessAllChanges(ZipCodeTerritory zipToUpdate, string newTerritory,
                                              string newStateCode, string newDescription,
                                              string newChannelCode)
        {
            try
            {
                if (!string.IsNullOrWhiteSpace(newTerritory)) zipToUpdate.IndDistrnId = newTerritory;
                if (!string.IsNullOrWhiteSpace(newStateCode)) zipToUpdate.StateCode = newStateCode;
                if (!string.IsNullOrWhiteSpace(newDescription)) zipToUpdate.DrmTerrDesc = newDescription;
                if (!string.IsNullOrWhiteSpace(newChannelCode)) zipToUpdate.ChannelCode = newChannelCode;
                if (zipToUpdate.EndDate == DateTime.MinValue) zipToUpdate.EndDate = DateTime.MaxValue;

                _db.Entry(zipToUpdate).State = EntityState.Modified;
                _db.SaveChanges();
                _updated++;
            }
            catch (DbEntityValidationException dbEx)
            {
                _msg += "Error during update; ";
                EventLog.WriteEntry("Monet", "Error during ProcessAllChanges: "
                    + zipToUpdate.ToString() + " |EX| " + dbEx.Message);
            }
            catch (Exception ex)
            {
                _msg += "Error during update; ";
                EventLog.WriteEntry("Monet", "Error during ProcessAllChanges: "
                    + zipToUpdate.ToString() + " |MESSAGE| " + ex.Message);
            }
        }

    EDIT: The ZipCodeIndex object contains a list of ZipCodeTerritory model objects. These aren't being pulled from a linq query, but instead simply passed back to the controller from the view. Here is the signature of the controller method that starts the process:

        [HttpPost]
        public ActionResult Update(ZipCodeIndex updateZip, string button)
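
    A hedged guess at the failure mode, suggested by "works the first time, fails after that": the static _db context outlives the request, so after the first pass it already tracks a ZipCodeTerritory with each key, and marking a second detached copy with the same key as Modified throws. One way around (a sketch; it assumes a ZipCodeTerritory DbSet and a single key property, neither of which is shown in the question):

        var tracked = _db.ZipCodeTerritory.Find(zipToUpdate.ZipCode); // set and key names assumed
        if (tracked != null)
        {
            _db.Entry(tracked).CurrentValues.SetValues(zipToUpdate);
            _db.SaveChanges();
        }

    Using one context per request, rather than a static one, is the other common fix.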


  • Run CGI in IIS 7 to work with GET without requiring a POST request

    - by Mohamed Meligy
    I'm trying to migrate an old CGI application from an existing Windows 2003 server (IIS 6.0), where it works just fine, to a new Windows 2008 server with IIS 7.0, where we're getting the following problem.

    After setting up the module handler and everything, I find that I can only access the CGI application (rdbweb.exe) if I'm calling it via a POST request (a form submit from another page). If I just try to type in the URL of the file (issuing a GET request) I get the following error:

        HTTP Error 502.2 - Bad Gateway
        The specified CGI application misbehaved by not returning a complete set of
        HTTP headers. The headers it did return are "Exception EInOutError in module
        rdbweb.exe at 00039B44. I/O error 6. ".

    This is a very old application for one of our clients. When we tried to call the vendor, they said we need to pay ~$3000 annual support fee in order to start the talk about it. Of course I'm trying to avoid that! Note that:

    - If we create a normal HTML form that submits to rdbweb.exe, the CGI works normally. We can't use this as a workaround, though, because some pages in the application link to rdbweb.exe with a normal link, not a form submit.
    - If we run rdbweb.exe from a Console (Command Prompt) window, not IIS, we get the normal HTML we'd typically expect; no problem.

    We have tried the following:

    - Ensuring the CGI module mapped to rdbweb.exe in IIS has all permissions (read, write, execute) enabled, and also that all verbs are allowed, not just specific ones; also tried allowing GET and POST explicitly.
    - Ensuring the application pool has "enable 32 bit applications" set to true.
    - Ensuring the website runs with an account that has full permissions on the rdbweb.exe file and the whole website (although we know "read" and "execute" should be enough).
    - Ensuring the machine-wide IIS setting for "ISAPI and CGI Restrictions" has the full path to rdbweb.exe allowed.
    - Making sure we have the latest Windows updates (for IIS 6 we found knowledge base articles stating bugs that require hotfixes, but nothing similar was found for IIS 7).
    - Changing the module from CGI to FastCGI; not working either.

    Now the only remaining possibility we have investigated is the following Microsoft Knowledge Base article: http://support.microsoft.com/kb/145661, which is about:

        CGI Error: The specified CGI application misbehaved by not returning a
        complete set of HTTP headers. The headers it did return are:

    The article suggests modifying the source code for the CGI application's header output. The following is its example of a correct header:

        print "HTTP/1.0 200 OK\n";
        print "Content-Type: text/html\n\n\n";

    Unfortunately we do not have the source to try this out, and I'm not sure anyway whether this is the issue we're having. Can you help me with this problem? Is there a way to make the application work without requiring a POST request? Note that on the old IIS 6 server the application is working just fine, and I could not find any special IIS configuration there whose equivalent I might try on IIS 7.


  • Weird networking problem (Linksys, Windows 7)

    - by Rohit Nair
    Okay, it's a bit tough to figure out where to start, but here is the basic summary of the issue.

    During general internet usage, there are times when any attempt to visit a website stalls at "Waiting for somedomain.com". This problem occurs in Firefox, IE and Chrome. No website will load, INCLUDING the router configuration page at 192.168.1.1. Curiously, ping works fine, and other network apps such as MSN Messenger continue to work and I can send and receive messages. Disconnecting and reconnecting to the wireless network seems to fix the problem for a bit, but there are times when it relapses into not loading after every 2-3 http requests. Restarting the router seems to fix the issue, but it can crop up hours or days later.

    I have a CCNA cert and I know my way around the Windows family of operating systems, so I'm going to list all the things I've tried here. Other computers on the network seem to suffer the same problem, which makes me think it might be a specific problem with something in Win7. The random nature of this issue makes it a bit difficult to confirm, but I can definitely say that I have experienced this on the following systems:

    - Windows 7 64-bit on my desktop
    - Windows Vista 32-bit on my desktop (the desktop has 2 wireless NICs and the problem existed on both)
    - Windows Vista 32-bit on my laptop (both wireless and wired)
    - Windows XP SP3 on another laptop (both wireless and wired)

    Using Wireshark to sniff packets seemed to indicate that although HTTP requests were being SENT out, no packets were coming in to respond to the HTTP request. However, other network apps continued to work; i.e., I would still receive IMs on Windows Live Messenger. Disabling IPv6 had no effect. Updating the router firmware to the latest stock firmware by Linksys had no effect. Switching to dd-wrt firmware had no effect. By "no effect" I mean that although the restart required by firmware updates fixed the problem at the time, it still came back.

    A couple of weeks back, after a LOT of googling and flipping of various options, I figured it might be a case of router slowdown (http://www.dd-wrt.com/wiki/index.php/Router%5FSlowdown) caused by the fact that I occasionally run a torrent client. I tried changing the configuration as suggested in that router slowdown link, and restarted the router. However, I have not run the torrent client for 12 days now, and yet I still randomly experience this problem. Currently the computer I am using is running Windows 7 64-bit.

    I would just like to reiterate some of the reasons I was confused by the issue:

    - Even the router config page at 192.168.1.1 would not load, indicating that it's not a problem with the WAN link, but probably a router issue or a local computer issue.
    - For some reason, disconnecting and reconnecting to the wireless network immediately seems to fix the problem.
    - Updating the router firmware, even switching to open source firmware, did nothing. So it seemed to be a computer issue.
    - On the other hand, I have not seen any mass outrage of people having networking problems with Windows 7 and Linksys routers, especially a problem of this sort, and I have tweaked every network setting I could think of.
    - Although HTTP seems to have trouble, ping works fine, DNS lookups work fine, and other networking apps work fine. However, if I disconnect from Windows Live Messenger and try to reconnect, it fails to reconnect. So although it could receive data over the existing TCP/IP connection, trying to start a new one failed?

    Does anyone have any further ideas on debugging or fixing this issue? I am reasonably certain there are no viruses or other malicious apps on my network, and I am also reasonably certain that nobody is accessing my router without my consent.

    - Router: Linksys WRT54G2 1.0 running dd-wrt firmware
    - Wireless Card: Alfa AWUS036H
    - OS: Windows 7 64-bit

    EDIT: I tried switching to a clean wireless channel free from interference, but the problem still persisted. I tried connecting directly with a cable, but the problem still persisted.

    Signed, a very confused and bewildered geek whose knowledge seems to be useless in the face of this frustrating network issue.


  • Bypass BIOS password set by faulty Toshiba firmware on Satellite A55 laptop?

    - by Brian
    How can the CMOS be cleared on the Toshiba Satellite A55-S1065? I have this 7-year-old laptop that has been crippled by a glitch in its BIOS:

        'A "Password =" prompt may be displayed when the computer is turned on, even
        though no power-on password has been set. If this happens, there is no
        password that will satisfy the password request. The computer will be
        unusable until this problem is resolved. [..] The occurrence of this problem
        on any particular computer is unpredictable -- it may never happen, but it
        could happen any time that the computer is turned on. [..] Toshiba will
        cover the cost of this repair under warranty until Dec 31, 2010.' -Toshiba

    As they stated, this machine is "unusable." The escape key does not bypass the prompt (nor does any other key), thus no operating system can be booted and no firmware updates can be installed. After doing some research, I found the solutions that have been suggested for various Toshiba Satellite models afflicted by this glitch:

    1. "Make arrangements with a Toshiba Authorized Service Provider to have this problem resolved." -Toshiba (same link). Even prior to the expiration of Toshiba's support ("repair under warranty until Dec 31, 2010"), there were reports that this solution is prohibitively expensive, with labor charges accruing even when the laptop is still under warranty, and other reports that are generally discouraging: "They were unable to fix it and the guy who worked on it said he couldn't find the jumpers on the motherboard to clear the BIOS. I paid $39 for my troubles and still have the password problem." -Steve. Since the cost of the repairs can now exceed the value of the hardware, this is either a DIY solution or a non-solution (i.e. the hardware is trash).

    2. Build a Toshiba parallel loopback by stripping and soldering the wires on a DB25 plug to connect these pins: 1-5-10, 2-11, 3-17, 4-12, 6-16, 7-13, 8-14, 9-15, 18-25. -CGSecurity. According to a list of supported models on pwcrack, this will likely not work for my Satellite A55-1065 (as well as many other models of similar age). -pwcrack

    3. Disconnect the laptop battery for an extended period of time. Doesn't work; the laptop sat in a closet for several years without the battery connected (I forgot about the whole thing for a while; the poor thing).

    4. Clear the CMOS by setting the proper jumper, by removing the CMOS (RTC) battery, or by short-circuiting a (hidden?) jumper that looks like a pair of solder marks. Various sources for various Satellite models:
       - Satellite A105: "you will see C88 clearly labeled right next the jack that the wireless card plugs into. There are two little solder squares (approx 1/16") at this location" -kerneltrap
       - Satellite 1800: "Underneath the RAM there is black sticker, peel off the black sticker and you will reveal two little solder marks which are actually 'jumpers'. Very carefully hold a flat-head screwdriver touching both points and power on the unit briefly, effectively 'shorting' this circuit." -shadowfax2020
       - Satellite L300: "Short the B500 solder pads on the system board." -Lester Escobar
       - Satellite A215: "Short the B500 solder pads on the system board." -fixya

    Clearing the CMOS could resolve the issue, but I cannot locate a jumper or a battery on this board. Nothing that looks remotely like a battery can be removed (everything is soldered). I have looked closely at the area around the memory and do not see any obvious solder pads that could be a secret jumper.

    (High-resolution pictures of the board accompanied the original post.) Where is the jumper (or the solder pads) to short-circuit and wipe the CMOS on this board?

    Possibly related questions: "Remove Toshiba laptop BIOS password?", "Password Problem Toshiba Satellite".


  • Need help with yum, Python and PHP in CentOS (I made a complete mess!)

    - by pek
    A while back I wanted to install some plugins for Trac, but they required Python 2.5. I tried installing it (I don't remember how) and the only thing I managed was to have two versions of Python (2.4 and 2.5). Trac still uses the old version, but the console uses 2.5 (python -V = Python 2.5.2). Anyway, the problem is not Python; the problem is yum (which uses Python). I am trying to upgrade my PHP version from 5.1.x to 5.2.x. I tried following this tutorial, but when I reach the step with yum I get this error:

        [root@XXX]# yum update
        Loading "installonlyn" plugin
        Setting up Update Process
        Setting up repositories
        Reading repository metadata in from local files
        Traceback (most recent call last):
          File "/usr/bin/yum", line 29, in ?
            yummain.main(sys.argv[1:])
          File "/usr/share/yum-cli/yummain.py", line 94, in main
            result, resultmsgs = base.doCommands()
          File "/usr/share/yum-cli/cli.py", line 381, in doCommands
            return self.yum_cli_commands[self.basecmd].doCommand(self, self.basecmd, self.extcmds)
          File "/usr/share/yum-cli/yumcommands.py", line 150, in doCommand
            return base.updatePkgs(extcmds)
          File "/usr/share/yum-cli/cli.py", line 672, in updatePkgs
            self.doRepoSetup()
          File "/usr/share/yum-cli/cli.py", line 109, in doRepoSetup
            self.doSackSetup(thisrepo=thisrepo)
          File "/usr/lib/python2.4/site-packages/yum/__init__.py", line 338, in doSackSetup
            self.repos.populateSack(which=repos)
          File "/usr/lib/python2.4/site-packages/yum/repos.py", line 200, in populateSack
            sack.populate(repo, with, callback, cacheonly)
          File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 91, in populate
            dobj = repo.cacheHandler.getPrimary(xml, csum)
          File "/usr/lib/python2.4/site-packages/yum/sqlitecache.py", line 100, in getPrimary
            return self._getbase(location, checksum, 'primary')
          File "/usr/lib/python2.4/site-packages/yum/sqlitecache.py", line 86, in _getbase
            (db, dbchecksum) = self.getDatabase(location, metadatatype)
          File "/usr/lib/python2.4/site-packages/yum/sqlitecache.py", line 82, in getDatabase
            db = self.makeSqliteCacheFile(filename, cachetype)
          File "/usr/lib/python2.4/site-packages/yum/sqlitecache.py", line 245, in makeSqliteCacheFile
            self.createTablesPrimary(db)
          File "/usr/lib/python2.4/site-packages/yum/sqlitecache.py", line 165, in createTablesPrimary
            cur.execute(q)
          File "/usr/lib/python2.4/site-packages/sqlite/main.py", line 244, in execute
            self.rs = self.con.db.execute(SQL)
        _sqlite.DatabaseError: near "release": syntax error

    Any help? Thank you.

    Update: OK, so I've managed to update yum, hoping it would solve my problems, but now I get a slightly different version of the same error:

        [root@XXX]# yum -y update
        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * addons: mirror.skiplink.com
         * base: www.gtlib.gatech.edu
         * epel: mirrors.tummy.com
         * extras: yum.singlehop.com
         * updates: centos-distro.cavecreek.net
        (process:30840): GLib-CRITICAL **: g_timer_stop: assertion `timer != NULL' failed
        (process:30840): GLib-CRITICAL **: g_timer_destroy: assertion `timer != NULL' failed
        Traceback (most recent call last):
          File "/usr/bin/yum", line 29, in ?
            yummain.user_main(sys.argv[1:], exit_code=True)
          File "/usr/share/yum-cli/yummain.py", line 309, in user_main
            errcode = main(args)
          File "/usr/share/yum-cli/yummain.py", line 178, in main
            result, resultmsgs = base.doCommands()
          File "/usr/share/yum-cli/cli.py", line 345, in doCommands
            self._getTs(needTsRemove)
          File "/usr/lib/python2.4/site-packages/yum/depsolve.py", line 101, in _getTs
            self._getTsInfo(remove_only)
          File "/usr/lib/python2.4/site-packages/yum/depsolve.py", line 112, in _getTsInfo
            pkgSack = self.pkgSack
          File "/usr/lib/python2.4/site-packages/yum/__init__.py", line 661, in <lambda>
            pkgSack = property(fget=lambda self: self._getSacks(),
          File "/usr/lib/python2.4/site-packages/yum/__init__.py", line 501, in _getSacks
            self.repos.populateSack(which=repos)
          File "/usr/lib/python2.4/site-packages/yum/repos.py", line 260, in populateSack
            sack.populate(repo, mdtype, callback, cacheonly)
          File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 190, in populate
            dobj = repo_cache_function(xml, csum)
          File "/usr/lib/python2.4/site-packages/sqlitecachec.py", line 42, in getPrimary
            self.repoid))
        TypeError: Can not create packages table: near "release": syntax error

    I'm guessing that this "release" thing has something to do with a repository, but I didn't find anything. I went to sqlitecachec.py, which around line 42 reads (line numbers added for convenience):

        39:        return self.open_database(_sqlitecache.update_primary(location,
        40:                                                              checksum,
        41:                                                              self.callback,
        42:                                                              self.repoid))

    Update 2: I think I found the problem. This post suggests that the problem is sqlite, not yum. The version of sqlite I have installed is 3.6.10, but I have no idea which version Python 2.4 uses. ld.so.conf contains the following:

        include ld.so.conf.d/*.conf
        /usr/local/lib

    In the folder /usr/local/lib there is a symbolic link named libsqlite3.so that points to libsqlite3.so.0.8.6. WHAT IS HAPPENING??????? :S
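
    A hedged way to test the sqlite-mismatch theory: check which sqlite shared library the Python 2.4 bindings that yum imports actually resolve to at runtime, versus the 3.6.10 copy in /usr/local/lib. The module path below is a guess; adjust it to wherever _sqlite.so lives on your box:

        # Which libsqlite does yum's python module load?
        ldd /usr/lib/python2.4/site-packages/_sqlite.so | grep -i sqlite

    If that points into /usr/local/lib, temporarily removing /usr/local/lib from ld.so.conf and running ldconfig should make yum link against the distribution's own sqlite again (an assumption, not a guaranteed fix).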


  • Useful Command-line Commands on Windows

    - by Sung Meister
    The aim of this Wiki is to promote using a command to open up commonly used applications without having to go through many mouse clicks, thus saving time on monitoring and troubleshooting Windows machines. Answer entries need to specify:

    - Application name
    - Commands
    - Screenshot (optional)
    - Shortcut to commands

    && - Command chaining
    %SYSTEMROOT%\System32\rcimlby.exe -LaunchRA - Remote Assistance (Windows XP)
    appwiz.cpl - Programs and Features (formerly known as "Add or Remove Programs")
    appwiz.cpl @,2 - Turn Windows Features On and Off (Add/Remove Windows Components pane)
    arp - Displays and modifies the IP-to-physical-address translation tables used by Address Resolution Protocol (ARP)
    at - Schedule tasks either locally or remotely without using Scheduled Tasks
    bootsect.exe - Updates the master boot code for hard disk partitions to switch between BOOTMGR and NTLDR
    cacls - Change Access Control List (ACL) permissions on a directory, its subcontents, or files
    calc - Calculator
    chkdsk - Check/fix the disk surface for physical errors or bad sectors
    cipher - Displays or alters the encryption of directories [files] on NTFS partitions
    cleanmgr.exe - Disk Cleanup
    clip - Redirects output of command-line tools to the Windows clipboard
    cls - Clear the command-line screen
    cmd /k - Run command with command extensions enabled
    color - Sets the default console foreground and background colors
    command.com - Default operating system shell
    compmgmt.msc - Computer Management
    control.exe /name Microsoft.NetworkAndSharingCenter - Network and Sharing Center
    control keyboard - Keyboard Properties
    control mouse (or main.cpl) - Mouse Properties
    control sysdm.cpl,@0,3 - Advanced tab of the System Properties dialog
    control userpasswords2 - Opens the classic User Accounts dialog
    desk.cpl - Opens the display properties
    devmgmt.msc - Device Manager
    diskmgmt.msc - Disk Management
    diskpart - Disk management from the command line
    dsa.msc - Opens Active Directory Users and Computers
    dsquery - Finds any objects in the directory according to criteria
    dxdiag - DirectX Diagnostic Tool
    eventvwr - Windows Event Log (Event Viewer)
    explorer . - Open Explorer with the current folder selected
    explorer /e, . - Open Explorer, with folder tree, with current folder selected
    F7 - View command history
    find - Searches for a text string in a file or files
    findstr - Find a string in a file
    firewall.cpl - Opens the Windows Firewall settings
    fsmgmt.msc - Shared Folders
    fsutil - Perform tasks related to FAT and NTFS file systems
    ftp - Transfers files to and from a computer running an FTP server service
    getmac - Shows the MAC address(es) of your network adapter(s)
    gpedit.msc - Group Policy Editor
    gpresult - Displays the Resultant Set of Policy (RSoP) information for a target user and computer
    httpcfg.exe - HTTP Configuration Utility
    iisreset - Restart IIS
    InetMgr.exe - Internet Information Services (IIS) Manager 7
    InetMgr6.exe - Internet Information Services (IIS) Manager 6
    intl.cpl - Regional and Language Options
    ipconfig - Internet protocol configuration
    lusrmgr.msc - Local Users and Groups
    msconfig - System Configuration
    notepad - Notepad ;)
    mmsys.cpl - Sound/Recording/Playback properties
    mode - Configure system devices
    more - Displays one screen of output at a time
    mrt - Microsoft Windows Malicious Software Removal Tool
    mstsc.exe - Remote Desktop Connection
    nbtstat - Displays protocol statistics and current TCP/IP connections using NBT
    ncpa.cpl - Network Connections
    netsh - Display or modify the network configuration of a computer that is currently running
    netstat - Network statistics
    net statistics - Check computer up time
    net stop - Stops a running service
    net use - Connects a computer to or disconnects a computer from a shared resource, or displays information about computer connections
    odbcad32.exe - ODBC Data Source Administrator
    pathping - A traceroute that collects detailed packet loss stats
    perfmon - Opens Reliability and Performance Monitor
    ping - Determine whether a remote computer is accessible over the network
    powercfg.cpl - Power management control panel applet
    quser - Display information about user sessions on a terminal server
    qwinsta - See disconnected remote desktop sessions
    reg.exe - Console registry tool for Windows
    regedit - Registry Editor
    rasdial - Connects to a VPN or a dialup network
    robocopy - Backup/restore/copy large amounts of files reliably
    rsop.msc - Resultant Set of Policy (shows the combined effect of all group policies active on the current system/login)
    runas - Run specific tools and programs with different permissions than the user's current logon provides
    sc - Manage anything you want to do with services
    schtasks - Enables an administrator to create, delete, query, change, run and end scheduled tasks on a local or remote system
    secpol.msc - Local Security Settings
    services.msc - Services control panel
    set - Displays, sets, or removes cmd.exe environment variables
    set DIRCMD - Preset dir parameters in cmd.exe
    start - Starts a separate window to run a specified program or command
    start . - Opens the current directory in Windows Explorer
    shutdown.exe - Shut down or reboot a local/remote machine
    subst.exe - Associates a path with a drive letter, including local drives
    systeminfo - Displays comprehensive information about the system
    taskkill - Terminate tasks by process id (PID) or image name
    tasklist.exe - List processes on a local or remote machine
    taskmgr.exe - Task Manager
    telephon.cpl - Telephone and Modem properties
    timedate.cpl - Date and Time
    title - Change the title of the CMD window you have open
    tracert - Trace route
    wmic - Windows Management Instrumentation Command-line
    winver.exe - Find Windows version
    wscui.cpl - Windows Security Center
    wuauclt.exe - Windows Update AutoUpdate Client


  • Why do we get a sudden spike in response times?

    - by Christian Hagelid
    We have an API that is implemented using ServiceStack and hosted in IIS. While performing load testing of the API we discovered that the response times are good, but that they deteriorate rapidly as soon as we hit about 3,500 concurrent users per server. We have two servers, and when hitting them with 7,000 users the average response times sit below 500ms for all endpoints. The boxes are behind a load balancer, so we get 3,500 concurrents per server. However, as soon as we increase the number of total concurrent users, we see a significant increase in response times. Increasing the concurrent users to 5,000 per server gives us an average response time per endpoint of around 7 seconds.

    The memory and CPU on the servers are quite low, both while the response times are good and after they deteriorate. At peak, with 10,000 concurrent users, the CPU averages just below 50% and the RAM sits around 3-4 GB out of 16. This leaves us thinking that we are hitting some kind of limit somewhere. (A perfmon capture of some key counters during a load test with a total of 10,000 concurrent users, attached to the original post, shows the requests/second graph becoming really erratic; this is the main indicator of slow response times. As soon as we see that pattern, we notice slow response times in the load test.)

    How do we go about troubleshooting this performance issue? We are trying to identify whether this is a coding issue or a configuration issue. Are there any settings in web.config or IIS that could explain this behaviour? The application pool is running .NET v4.0 and the IIS version is 7.5. The only change we have made from the default settings is to update the application pool Queue Length value from 1,000 to 5,000. We have also added the following config settings to the Aspnet.config file:

        <system.web>
          <applicationPool maxConcurrentRequestsPerCPU="5000"
                           maxConcurrentThreadsPerCPU="0"
                           requestQueueLimit="5000" />
        </system.web>

    More details: the purpose of the API is to combine data from various external sources and return JSON. It currently uses an in-memory cache implementation to cache individual external calls at the data layer. The first request to a resource will fetch all the data required, and any subsequent requests for the same resource will get results from the cache. We have a 'cache runner' implemented as a background process that updates the information in the cache at certain set intervals. We have added locking around the code that fetches data from the external resources. We have also implemented the services that fetch data from the external sources in an asynchronous fashion, so that an endpoint should only be as slow as the slowest external call (unless we have data in the cache, of course). This is done using the System.Threading.Tasks.Task class. Could we be hitting a limitation in terms of the number of threads available to the process?
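
    One hedged avenue for that last question: if requests queue while CPU stays low, thread-pool starvation is a common culprit, and it can be confirmed by logging the pool's headroom from inside the app during a load test. A minimal C# sketch:

        // Log how much of the thread pool is actually in use.
        int availWorkers, availIo, maxWorkers, maxIo;
        System.Threading.ThreadPool.GetAvailableThreads(out availWorkers, out availIo);
        System.Threading.ThreadPool.GetMaxThreads(out maxWorkers, out maxIo);
        Console.WriteLine("Workers in use: {0}/{1}, IOCP in use: {2}/{3}",
            maxWorkers - availWorkers, maxWorkers, maxIo - availIo, maxIo);

    If the worker count is pinned at the maximum while requests stall, the limit is thread availability rather than the IIS queue lengths.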


  • Dhcpd Daemon is trying to lease itself?

    - by tommieb75
    I have a Slackware Linux 13.0 box with two interfaces, eth0 and eth1. I have set this box up to be on the 192.168.1.0/24 network, with a subnet mask of 255.255.255.0. I am trying to run a dhcpd server on this box to service the two interfaces above, so I subnetted the 192.168.1.0/24 network into two subnets:

    - For eth0: 192.168.1.1, subnet mask 255.255.255.128, broadcast address 192.168.1.127.
    - For eth1: 192.168.1.129, subnet mask 255.255.255.128, broadcast address 192.168.1.255.

    Both interfaces are assigned manually:

        eth0      Link encap:Ethernet  HWaddr 00:00:00:00:00:00
                  inet addr:192.168.1.1  Bcast:192.168.1.127  Mask:255.255.255.128
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:39 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:1404 (1.3 KiB)
                  Interrupt:11 Base address:0x8000 Memory:faffc000-faffcfff

        eth1      Link encap:Ethernet  HWaddr 00:00:00:00:00:00
                  inet addr:192.168.1.128  Bcast:192.168.1.255  Mask:255.255.255.128
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:10003 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:13286 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:1589229 (1.5 MiB)  TX bytes:9900005 (9.4 MiB)
                  Interrupt:11

    Here is how dhcpd.conf is set up:

        authoritative;
        ddns-update-style interim;
        ignore client-updates;

        subnet 192.168.1.0 netmask 255.255.255.128 {
            range 192.168.1.2 192.168.1.126;
            default-lease-time 86400;
            max-lease-time 86400;
            option routers 192.168.1.1;
            option ip-forwarding off;
            option domain-name-servers 208.67.222.222, 208.67.220.220;
            option broadcast-address 192.168.1.127;
            option subnet-mask 255.255.255.128;
        }

        subnet 192.168.1.128 netmask 255.255.255.128 {
            range 192.168.1.129 192.168.1.254;
            default-lease-time 86400;
            max-lease-time 86400;
            option routers 192.168.1.1;
            option ip-forwarding off;
            option domain-name-servers 208.67.222.222, 208.67.220.220;
            option broadcast-address 192.168.1.255;
            option subnet-mask 255.255.255.128;
        }

    This is what shows up in the log:

        Apr 10 18:09:58 inspiron8600 dhcpd: DHCPDISCOVER from 00:00:00:00:00:00 (inspiron8600) via eth1
        Apr 10 18:09:58 inspiron8600 dhcpd: DHCPOFFER on 192.168.1.131 to 00:00:00:00:00:00 (inspiron8600) via eth1
        Apr 10 18:10:01 inspiron8600 dhcpcd[3832]: eth1: adding IP address 169.254.153.6/16

    This happens spuriously, and the log fills up with nonsense. So my question is this: how do I stop this from happening, and why would the box be trying to give itself a lease? I am sure I have missed something but cannot see it, and would appreciate a pair of eyes from the community to spot the obvious flaw!
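
    A hedged observation that may be the whole answer: the third log line is from dhcpcd, the DHCP client daemon, not from dhcpd, the server. That means a DHCP client is running on eth1 and soliciting the local server (and, failing that, self-assigning a 169.254 link-local address). If both interfaces are meant to be static, checking that the client isn't started at boot should quiet the log:

        # Is the client daemon running?
        ps ax | grep dhcpcd

        # On Slackware, interface setup lives in /etc/rc.d/rc.inet1.conf; an entry
        # like USE_DHCP[1]="yes" (index 1 = eth1 - an assumption about your config)
        # would start dhcpcd on that interface. Clearing it keeps eth1 static.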


  • Apache doesn't run multiple requests concurrently

    - by Reinderien
    I'm currently running this simple Python CGI script to test rudimentary IPC:

        #!/usr/bin/python -u
        import cgi, errno, fcntl, os, os.path, sys, time

        print("""Content-Type: text/html; charset=utf-8

        <!doctype html>
        <html lang="en">
        <head>
        <meta charset="utf-8" />
        <title>IPC test</title>
        </head>
        <body>
        """)

        ftempname = '/tmp/ipc-messages'
        master = not os.path.exists(ftempname)
        if master:
            fmode = 'w'
        else:
            fmode = 'r'

        print('<p>Opening file</p>')
        sys.stdout.flush()
        ftemp = open(ftempname, fmode)
        print('<p>File opened</p>')

        if master:
            print('<p>Operating as master</p>')
            sys.stdout.flush()
            for i in range(10):
                print('<p>' + str(i) + '</p>')
                sys.stdout.flush()
                time.sleep(1)
            ftemp.close()
            os.remove(ftempname)
        else:
            print('<p>Operating as a slave</p>')
            ftemp.close()

        print("""
        </body>
        </html>""")

    The 'server-push' portion works; that is, for the first request, I do see piecewise updates. However, while the first request is being serviced, subsequent requests are not started; they only start after the first request has finished. Any ideas on why, and how to fix it?

    Edit: I see the same non-concurrent behaviour with vanilla PHP, running this:

        <!doctype html>
        <html lang="en">
        <!-- $Id: $-->
        <head>
        <meta charset="utf-8" />
        <title>IPC test</title>
        </head>
        <body>
        <p>
        <?php
        function echofl($str) {
            echo $str . "</b>\n";
            ob_flush();
            flush();
        }

        define('tempfn', '/tmp/emailsync');

        if (file_exists(tempfn))
            $perms = 'r+';
        else
            $perms = 'w';

        assert($fsync = fopen(tempfn, $perms));
        assert(chmod(tempfn, 0600));

        if (!flock($fsync, LOCK_EX | LOCK_NB, $wouldblock)) {
            assert($wouldblock);
            $master = false;
        }
        else
            $master = true;

        if ($master) {
            echofl('Running as master.');
            assert(fwrite($fsync, 'content') != false);
            assert(sleep(5) == 0);
            assert(flock($fsync, LOCK_UN));
        }
        else {
            echofl('Running as slave.');
            echofl(fgets($fsync));
        }

        assert(fclose($fsync));
        echofl('Done.');
        ?>
        </p>
        </body>
        </html>
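
    A hedged possibility worth ruling out before blaming Apache: browsers commonly hold back a second request to the same URL until the first completes, which looks exactly like server-side serialization. Two parallel requests from the command line bypass the browser and show whether Apache itself is blocking (the path below is illustrative):

        curl http://localhost/cgi-bin/ipc-test.py &
        curl http://localhost/cgi-bin/ipc-test.py &
        wait

    If both start streaming at once under curl, the serialization is happening in the browser, not in Apache.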


  • How to uninstall MariaDB and re-install MySQL? A MySQL install turns into a MariaDB install

    - by Suma
    I recently upgraded my CentOS system via the desktop. Mistake! I had MariaDB and phpMyAdmin working just fine before, but after the upgrade they stopped. I frantically googled and tried to follow some tutorials about MariaDB/MySQL reinstalls until I came to this one: http://centosforge.com/node/how-replace-mysql-mariadb-centos-6-including-mysql-uninstall-instructions-and-yum-install

    I executed this command to remove all of MySQL:

        yum remove mysql-server mysql-libs mysql-devel mysql*

    and then tried to reinstall MySQL as below; it fails with these errors:

        [root@localhost ~]# yum install mysql-server mysql mysql-devel
        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * base: centos.serverspace.co.uk
         * extras: centos.serverspace.co.uk
         * rpmforge: www.mirrorservice.org
         * updates: mirror.rmg.io
        Setting up Install Process
        Package mysql-server is obsoleted by MariaDB-server, trying to install MariaDB-server-5.5.29-1.i686 instead
        Package mysql is obsoleted by MariaDB-server, trying to install MariaDB-server-5.5.29-1.i686 instead
        Package mysql-devel is obsoleted by MariaDB-devel, trying to install MariaDB-devel-5.5.29-1.i686 instead
        Resolving Dependencies
        --> Running transaction check
        ---> Package MariaDB-devel.i686 0:5.5.29-1 set to be updated
        --> Processing Dependency: MariaDB-common for package: MariaDB-devel
        ---> Package MariaDB-server.i686 0:5.5.29-1 set to be updated
        --> Processing Dependency: libssl.so.10 for package: MariaDB-server
        --> Processing Dependency: libcrypto.so.10 for package: MariaDB-server
        --> Running transaction check
        ---> Package MariaDB-common.i686 0:5.5.29-1 set to be updated
        --> Processing Dependency: MariaDB-compat for package: MariaDB-common
        ---> Package MariaDB-server.i686 0:5.5.29-1 set to be updated
        --> Processing Dependency: libssl.so.10 for package: MariaDB-server
        --> Processing Dependency: libcrypto.so.10 for package: MariaDB-server
        --> Running transaction check
        ---> Package MariaDB-compat.i686 0:5.5.29-1 set to be updated
        ---> Package MariaDB-server.i686 0:5.5.29-1 set to be updated
        --> Processing Dependency: libssl.so.10 for package: MariaDB-server
        --> Processing Dependency: libcrypto.so.10 for package: MariaDB-server
        --> Finished Dependency Resolution
        MariaDB-server-5.5.29-1.i686 from mariadb has depsolving problems
        --> Missing Dependency: libcrypto.so.10 is needed by package MariaDB-server-5.5.29-1.i686 (mariadb)
        MariaDB-server-5.5.29-1.i686 from mariadb has depsolving problems
        --> Missing Dependency: libssl.so.10 is needed by package MariaDB-server-5.5.29-1.i686 (mariadb)
        Error: Missing Dependency: libcrypto.so.10 is needed by package MariaDB-server-5.5.29-1.i686 (mariadb)
        Error: Missing Dependency: libssl.so.10 is needed by package MariaDB-server-5.5.29-1.i686 (mariadb)
         You could try using --skip-broken to work around the problem
         You could try running: package-cleanup --problems
                                package-cleanup --dupes
                                rpm -Va --nofiles --nodigest
        [root@localhost ~]#

    If I now try to install libssl.so.10, I am asked to install glibc libraries 2.17 and 2.7; other discussions have said to stay clear of these, as replacing glibc can break the whole system. I tried downloading 2.17 and it's huge; it took ages to unzip.

    Could someone please help me to completely remove MariaDB and install MySQL, so that I don't get the above errors and don't get pushed over to MariaDB when I run:

        yum install mysql-server mysql mysql-devel

    There is a ton of material on how to install MariaDB, but nothing I have found so far plainly explains how to go backwards to MySQL.
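
    A minimal cleanup sketch of the usual way back to stock MySQL (the repo file name below is a guess; adjust to whatever 'grep -ril mariadb /etc/yum.repos.d/' turns up):

        # Remove the MariaDB packages named in the output above
        yum remove "MariaDB-*"

        # Disable the third-party repo whose packages declare they obsolete mysql
        mv /etc/yum.repos.d/mariadb.repo /etc/yum.repos.d/mariadb.repo.disabled

        # Forget the cached metadata that carries the obsoletes mapping
        yum clean all

        # Install MySQL from the stock CentOS repos only
        yum --disablerepo='*' --enablerepo=base,updates install mysql-server mysql mysql-devel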

    Read the article

  • Active Directory Time Synchronisation - Time-Service Event ID 50

    - by George
    I have an Active Directory domain with two DCs. The first DC in the forest/domain is Server 2012, the second is 2008 R2. The first DC holds the PDC Emulator role. I sporadically receive a warning from the Time-Service source, event ID 50:

        The time service detected a time difference of greater than %1 milliseconds for %2 seconds.
        The time difference might be caused by synchronization with low-accuracy time sources or by
        suboptimal network conditions. The time service is no longer synchronized and cannot provide
        the time to other clients or update the system clock. When a valid time stamp is received
        from a time service provider, the time service will correct itself.

    Time sync in the domain is configured with the second DC set to synchronise using the /syncfromflags:DOMHIER flag. The first DC is configured to sync time using /syncfromflags:MANUAL /reliable:YES, from a peer list consisting of a number of UK-based stratum 2 servers, such as ntp2d.mcc.ac.uk.

    I'm confused about why I receive this warning. It implies that my PDC emulator cannot synchronise time with a supposedly reliable external time source, and it quotes a time difference of 5 seconds over 900 seconds. It's also worth mentioning that I used to use a UK pool from ntp.org, but I received the warning much more often then. Since switching to the UK-based academic time servers, it seems more reliable.

    Can someone with more experience shed some light on this? Perhaps it is purely transient. Should I disregard the warning? Is my configuration sound?

    EDIT: I should add that the DCs are virtual, installed on two separate VMware ESXi/vSphere physical hosts. I can also confirm that, as per MDMarra's comment and best practice, VMware time sync is disabled, since:

        c:\Program Files\VMware\VMware Tools\VMwareToolboxCmd.exe timesync status

    returns Disabled.

    EDIT 2: A strange new issue has cropped up, and I've noticed a pattern. Originally, the event ID 50 warnings occurred at about 12:30 pm each day, which is interesting since our Veeam backup runs at 12 noon. Since I made the changes discussed here, I now receive event ID 51 instead of 50. The new warning says:

        The time sample received from peer server.ac.uk differs from the local time by -40 seconds
        (or approximately 40 seconds).

    This has happened two days in a row, and now I'm even more confused. The time never corrects itself until I manually intervene. The issue seems related to virtualisation and Veeam; something may be happening while Veeam backs up the PDCe. Any suggestions?

    UPDATE & SUMMARY: msemack's excellent list of resources below (the accepted answer) provided enough information to correctly configure the time service in the domain, and it should be the first port of call for anyone looking to verify their configuration. I resolved the final "40-second jump" issue (there are no more warnings) by adjusting the VMware time sync settings as described in the Veeam knowledge base article at http://www.veeam.com/kb1202. In any case, whether a future reader uses ESXi and Veeam or not, the resources here are an excellent source of information on the time sync topic, and msemack's answer is particularly invaluable.
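
    For anyone verifying a similar two-DC setup, the relevant commands look roughly like this (standard w32tm usage, not quoted from the thread; the peer list is illustrative):

        :: On the PDC emulator: sync from external NTP and advertise as reliable
        w32tm /config /manualpeerlist:"ntp2d.mcc.ac.uk,0x8" /syncfromflags:manual /reliable:yes /update

        :: On the second DC: follow the domain hierarchy
        w32tm /config /syncfromflags:domhier /update

        :: Restart the service, then inspect status, peers and raw offsets
        net stop w32time && net start w32time
        w32tm /query /status
        w32tm /query /peers
        w32tm /stripchart /computer:ntp2d.mcc.ac.uk /samples:5 /dataonly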

    Read the article

  • Upgrading PHP from 5.1 to 5.2 on CentOS 5.4

    - by andufo
    I'm trying to upgrade PHP 5.1 to 5.2 on CentOS 5.4. I use yum update php, and the result is this (check out the last part):

        [root@mail httpd]# yum update php
        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * addons: mirror.raystedman.net
         * base: mirrors.serveraxis.net
         * centosplus: mirrors.tummy.com
         * contrib: mirror.raystedman.net
         * extras: mirror.raystedman.net
         * updates: mirrors.netdna.com
        Setting up Update Process
        Resolving Dependencies
        --> Running transaction check
        --> Processing Dependency: php = 5.1.6-27.el5 for package: php-devel
        --> Processing Dependency: php = 5.1.6 for package: php-eaccelerator
        ---> Package php.x86_64 0:5.2.10-1.el5.centos set to be updated
        --> Processing Dependency: php-cli = 5.2.10-1.el5.centos for package: php
        --> Processing Dependency: php-common = 5.2.10-1.el5.centos for package: php
        --> Running transaction check
        --> Processing Dependency: php = 5.1.6 for package: php-eaccelerator
        ---> Package php-cli.x86_64 0:5.2.10-1.el5.centos set to be updated
        --> Processing Dependency: php-common = 5.1.6-27.el5 for package: php-xml
        --> Processing Dependency: php-common = 5.1.6-27.el5 for package: php-pdo
        --> Processing Dependency: php-common = 5.1.6-27.el5 for package: php-gd
        --> Processing Dependency: php-common = 5.1.6-27.el5 for package: php-ldap
        --> Processing Dependency: php-common = 5.1.6-27.el5 for package: php-mbstring
        --> Processing Dependency: php-common = 5.1.6-27.el5 for package: php-mysql
        --> Processing Dependency: php-common = 5.1.6-27.el5 for package: php-imap
        ---> Package php-common.x86_64 0:5.2.10-1.el5.centos set to be updated
        ---> Package php-devel.x86_64 0:5.2.10-1.el5.centos set to be updated
        --> Running transaction check
        --> Processing Dependency: php = 5.1.6 for package: php-eaccelerator
        ---> Package php-gd.x86_64 0:5.2.10-1.el5.centos set to be updated
        ---> Package php-imap.x86_64 0:5.2.10-1.el5.centos set to be updated
        ---> Package php-ldap.x86_64 0:5.2.10-1.el5.centos set to be updated
        ---> Package php-mbstring.x86_64 0:5.2.10-1.el5.centos set to be updated
        ---> Package php-mysql.x86_64 0:5.2.10-1.el5.centos set to be updated
        ---> Package php-pdo.x86_64 0:5.2.10-1.el5.centos set to be updated
        ---> Package php-xml.x86_64 0:5.2.10-1.el5.centos set to be updated
        --> Finished Dependency Resolution
        php-eaccelerator-5.1.6_0.9.5.2-4.el5.rf.x86_64 from installed has depsolving problems
        --> Missing Dependency: php = 5.1.6 is needed by package php-eaccelerator-5.1.6_0.9.5.2-4.el5.rf.x86_64 (installed)
        Error: Missing Dependency: php = 5.1.6 is needed by package php-eaccelerator-5.1.6_0.9.5.2-4.el5.rf.x86_64 (installed)
         You could try using --skip-broken to work around the problem
         You could try running: package-cleanup --problems
                                package-cleanup --dupes
                                rpm -Va --nofiles --nodigest
        The program package-cleanup is found in the yum-utils package.
        [root@mail httpd]#

    What are the consequences of using --skip-broken? Any recommendations?
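
    The blocker is the third-party php-eaccelerator package (built against PHP 5.1.6, from the rpmforge repo), not CentOS itself. A sketch of one way through, instead of --skip-broken (whether a 5.2-compatible eAccelerator build exists in the configured repos is an assumption to verify):

        # Remove the accelerator that pins php to 5.1.6, then update php
        yum remove php-eaccelerator
        yum update php

        # Afterwards, check whether a matching eAccelerator build is available
        yum list available | grep -i eaccelerator

        # Note: --skip-broken would most likely just skip the php update that
        # conflicts with eaccelerator, leaving the system on 5.1.6.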

    Read the article

  • BSOD & System Failure after trying to install new RAM

    - by Praveen Kumar
    I have updated the question with sections, so that people won't find it difficult to read.

    Basic System Information

    Let me give a basic introduction to my system. I have a system with the following configuration:

        Processor: Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz
        RAM: Corsair Vengeance - 4GB Single Module DDR3 Memory Kit (CMZ4GX3M1A1600C9) x 2
        OS: Windows 7 Ultimate, SP1 Build 7601
        HDD: 1 TB Seagate 7200 RPM

    The Problem

    It was working fine for about a year. Yesterday I planned to increase my RAM to 16 GB by adding another set of two Corsair Vengeance 4GB DDR3 modules (CMZ4GX3M1A1600C9). I got them from an authorised reseller, and the RAM was fitted by a service engineer. After the RAM was fitted (all four modules), the system failed to start, with error code 0x000000f4. The complete information is:

        Problem signature:
          Problem Event Name: BlueScreen
          OS Version: 6.1.7601.2.1.0.256.1
          Locale ID: 16393

        Additional information about the problem:
          BCCode: f4
          BCP1: 0000000000000003
          BCP2: FFFFFA8008A39060
          BCP3: FFFFFA8008A39340
          BCP4: FFFFF800037C8510
          OS Version: 6_1_7601
          Service Pack: 1_0
          Product: 256_1

        Files that help describe the problem:
          C:\Windows\Minidump\093012-13041-01.dmp
          C:\Users\Praveen Kumar\AppData\Local\Temp\WER-30716-0.sysdata.xml

        Read our privacy statement online: http://go.microsoft.com/fwlink/?linkid=104288&clcid=0x0409
        If the online privacy statement is not available, please read our privacy statement offline:
        C:\Windows\system32\en-US\erofflps.txt

    Another Problem

    We first thought that the RAM caused the issue, so I returned the new modules; my computer configuration is now exactly as it was before. But following the removal of the RAM, I have had several more crashes. One suspicious crash came with error code c0000135:

        STOP: c0000135 The program can't start because %hs is missing from your computer.
        Try reinstalling the program to fix this problem.

    After reading this, this and this, none of which matched my case, I got no further, though I didn't receive any more STOP c0000135 messages. But the 0x000000f4 keeps coming back. I am writing from the same system, and it allows me to work for, say, half an hour at most. Then I hear a device-disconnect sound, the one you hear in Windows 7 when a USB mass storage device is unplugged. Immediately after that, my screen goes blank and I get the 0x000000f4 blue screen. I am now really concerned about my hard disk data, but I have no clue whether there is a problem with the HDD.

    My Question

    What files do I need to submit for your reference? Can this issue be fixed? The system stays up longer if I remove the RAM, clean it, and put it back. Weird! I hope I have given the necessary information to help you. Thanks in advance.

    Minidumps

    I have uploaded all the minidump DMP files from the C:\Windows\Minidump folder here: http://www.praveen-kumar.com/Minidumps.zip - let me know if you face any issues accessing it; I can share it elsewhere.

    Updates

    30-Sep-2012 10:15 AM IST: When I keep the system cover open and press the HDD cable firmly into place, the system stays up for about half an hour. Also, the CPU fan speed seems slow; it rotates at around 900 RPM, though the CPU temperature is no more than 70°C.

    30-Sep-2012 10:30 AM IST: My modem (Beetel 220BX ADSL2+ router) failed. I have no idea how it is related to this issue, but I thought I should document it too. I am really having a bad day.

    30-Sep-2012 11:00 AM IST: System still running fine, with the cabinet cover open, now for about an hour.

    30-Sep-2012 12:00 PM IST: I shut down the system and closed the cabinet. When I started the system, it hung after I entered the password, and after a few minutes I got the same 0x000000f4 error. With the case in the upright position I reseated the hard disk cable, and now it is booting fine. Waiting for more observations and answers.
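
    Since reseating the HDD cable changes the behaviour, and bug check 0xF4 (CRITICAL_OBJECT_TERMINATION) is commonly raised when storage drops out from under a critical system process, quick disk-health checks from an elevated command prompt would be a sensible next step (standard Windows tools, not from the original post):

        :: SMART self-assessment as reported by the drive (OK / Pred Fail)
        wmic diskdrive get model,status

        :: Schedule a full surface scan of the system volume for the next boot
        chkdsk C: /f /r

        :: Show recent disk/NTFS errors from the System event log
        wevtutil qe System "/q:*[System[Provider[@Name='disk' or @Name='Ntfs']]]" /c:20 /f:text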

    Read the article

  • WDS DHCP same server on Windows Server 2008

    - by Richard
    I have been struggling with a problem on my Windows Server 2008 for the past 4-5 hours and cannot figure out what's wrong. I have tried pretty much everything I found on Google, and all the links are purple by now. Hopefully you can help me.

    I am running Windows Server 2008 Standard edition with the latest updates as of today, plus a Windows Server 2003. Both are virtual machines on my ESXi 5 server. My network is 192.168.10.0/24:

        W2k8: 192.168.10.251, the PDC, running ADS, DHCP and WDS
        W2k3: 192.168.10.253 and 192.168.1.175, running Routing and Remote Access and ISA 2006 Enterprise

    On my internal network (192.168.10.0/24) I have a client machine (192.168.10.10) running VMware Workstation. I am trying to deploy Windows 7 Home Premium to a virtual machine on that Workstation via PXE. I have set the Workstation VM's network adapter to "bridged" so that it uses the physical network adapter and is connected to my internal network. The DHCP pool is configured to give out addresses from 192.168.10.10-192.168.10.15 (this works for normal clients and the pool is not exhausted).

    When I start the VM with PXE I get the error:

        PXE-E52: proxyDHCP offers were received. No DHCP offers were received

    Apparently this means that WDS responded but the DHCP server did not. People suggested directing the traffic to both WDS and DHCP on the router; since everything is on the same subnet there is no need for that, as the broadcast is seen by everyone (WDS and DHCP). No reservation for the virtual MAC addresses is made in DHCP.

    It was also suggested to configure the DHCP options:

        Option 60 = PXEClient
        Option 66 = WDS server name or IP address
        Option 67 = Boot file name

    However, this is not recommended by Microsoft; I tried it anyway and it did not solve my problem.

    The configuration on the WDS (my system is German, so the actual naming might differ):

    PXE Response tab: PXE response is set to "All (known and unknown)".

    DHCP tab: "Do not listen on port 67" is NOT ticked; if I tick it, I get no responses at all and the PXE error becomes PXE-E51 (neither DHCP nor proxyDHCP offers were received). "DHCP option 60 for PXEClient" is ticked. The confusing part is that the tab advises ticking the first option as well, since DHCP is on the same server.

    Network Configuration tab: use the multicast IP address range 224.0.1.0-224.0.10.0. That's not the default, but it is within the allowed range. The UDP port range is the default, since changing it is not advised. I also tried changing the network profile between 100 Mbit, 1 Gbit and custom. I am running a 1 Gbit network with Cat6 cables and a 5-port 1 Gbit Netgear switch; everything is configured to use 1 Gbit. The WDS server is authorised for the DHCP server.

    My ISA 2006 configuration: for the internal network, including the W2k3 host, I have a policy array allowing protocols 67, 68, 53, ICMP, 4011 UDP receive, 64001-6500 UDP send receive, and 69 UDP send.

    Routing and Remote Access: I tried the suggested DHCP relay agent configuration as well, but that did not work.

    I would highly appreciate any kind of help, because my nerves are pretty much done here. Thank you very much in advance.
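
    For reference, the two WDS settings that matter when DHCP runs on the same box (don't listen on UDP 67, and have DHCP tag offers with option 60) are meant to be set together as a pair, and can be applied from the command line. This is standard wdsutil syntax, not quoted from the thread:

        :: DHCP owns UDP 67; WDS advertises itself via DHCP option 60 instead
        wdsutil /Set-Server /UseDhcpPorts:No /DhcpOption60:Yes

        :: Verify the resulting configuration
        wdsutil /Get-Server /Show:Config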

    Read the article

  • I cut-to-moved my DCIM folder to external SD when an automatic Android OS update popped up before I could choose the target - Cannot recover 200+ photos

    - by ZeroG
    I was moving my Exhibit II's DCIM camera folder (with months of photos inside) to its external SD card, in order to transfer the photos to my laptop. In my overconfidence, I hurriedly chose cut-to-move (rather than copy-to-move) when, before I could choose the target, an automatic Android OS update popped up. I figured everything was still in cache and calmly tried to go through with the update, but it was not the typical seamless event: it showed the downloading icon, but since I had rooted the phone, it dropped to the command line and the recovery sequence. Neither Android nor I had downloaded any alternate custom ROM files to the internal SD to update from. Were they trying to make me unroot my phone with some bogus on-the-fly update, or hand me down an unrooted ROM I'd have to figure out how to root again? Yes, I saw the blurb about overwriting a file of the same name, but I was trying to shake off the stubborn update being forced on my phone at this precarious moment. I thought I had frozen or turned off all those auto-updates previously. Anyway, phones are small and fingers are big (sigh)...

    I tried to reboot into safe mode, but the resulting photo files were partially overwritten: about 200 files still had names but zero bytes in them. I thought they might still be hung in cache or deposited somewhere else, but I have searched everywhere with file managers. Since I did not have Titanium backing up the camera folder, photo folder or gallery, I cannot recover the 200+ photos. Dumb. You can understand my dilemma: I am involved in the arts, and although these are only camera-phone photos, most of them were historic or at least aesthetically valuable as to subject matter. Photo ops don't reoccur. I have tried a couple of recovery apps from the market, such as Search Duplicates & Recover, to no avail; I was only able to salvage photos I'd sent out in messages. I've got several decades in computers, and this is such a miserable beginner's piece of bad luck that I can't believe it happened to me. They were precious photos!

    Yes, I have turned Titanium on since, and yes, I even tried USB-to-laptop recoveries. Being on a MacBook Pro, I'm trying androidfiletransfer.dmg, but I'd have to upgrade to Peach Sunrise to get above Android 3.0 for that app to recognize the phone via USB, and the programmer says installation zeroes your data, which would pretty much toast any hidden place where these photos may have been deposited. I don't want to do that and am still trying to find them. They certainly didn't make it to my external SD card. If anyone out there knows anything, please help, and thanks. Despite decades in computing, unfamiliar and ever-changing hardware and software can humble even the most seasoned veteran.
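
    Since the phone is rooted, one heavily hedged avenue (success is doubtful for files already truncated to zero bytes) is to image the internal storage over adb and carve it with PhotoRec; the block device path below is hypothetical and must be checked against /proc/partitions on the device first:

        # Requires: root, USB debugging, adb, and testdisk/photorec on the laptop.
        # Stop writing to the phone immediately, then find the /data partition:
        adb shell su -c "cat /proc/partitions"

        # Pull a raw image (binary-safe only with reasonably recent adb builds)
        adb shell su -c "dd if=/dev/block/mmcblk0 bs=4096" > internal.img

        # Carve recoverable JPEGs out of the image into ./recovered
        photorec /log /d recovered internal.img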

    Read the article

  • Tuning performance of Ubuntu 10.04 on Compaq Evo W4000

    - by Fantomas
    Hi, I got this computer free and installed Ubuntu 10.04 on it plus updates, and followed this tutorial all the way: http://www.unixmen.com/linux-tutorials/937-things-to-do-after-installing-ubuntu-1004-lts-lucid-lynx

    I love the Docky that comes with it, but the computer has been running rather slowly.

    The system:

        Kernel: 2.6.32-22-generic
        GNOME: 2.30.0 (I like GNOME!)
        Memory: 1 GB
        Processor: Intel(R) Pentium(R) 4 CPU 1700 MHz (needless to say, it is 32-bit)

    I think I dedicated 128 MB to video memory while installing, but cannot find that setting now. I also installed an NVIDIA driver for the 3D card, so I probably want to reclaim that memory.

    I want to trim the fat but also keep some of the sex appeal of Ubuntu 10.04. I will gift this computer to a friend, who will use it for Internet, music, videos, word processing, Skype and instant messaging. He is non-technical, so this hardware and Linux should work for him; I just need to speed it up while keeping the good software and a nice UI. I sort of know my way around Linux, but not that well. Feel free to ask me to run particular commands if you want more info.

    For starters, here are the services below. Which ones can I kill, and how? What else can go? There is no need to run ssh, ftp, http or ntp servers. As I said before, this computer is for a non-technical person. There is also absolutely no Bluetooth or wireless networking needed; it will feed off a regular Ethernet cable.

    What I do not want to do is reinstall some other distro or recompile a kernel. I want to make it 80% perfect spending 20% of the energy :) Thanks!

        $ service --status-all
         [ ? ]  acpi-support
         [ ? ]  acpid
         [ ? ]  alsa-mixer-save
         [ ? ]  anacron
         [ - ]  apparmor
         [ ? ]  apport
         [ ? ]  atd
         [ ? ]  avahi-daemon
         [ ? ]  binfmt-support
         [ - ]  bluetooth
         [ - ]  bootlogd
         [ - ]  brltty
         [ ? ]  console-setup
         [ ? ]  cron
         [ + ]  cups
         [ ? ]  dbus
         [ ? ]  dmesg
         [ ? ]  dns-clean
         [ ? ]  failsafe-x
         [ - ]  fancontrol
         [ ? ]  gdm
         [ - ]  grub-common
         [ ? ]  hostname
         [ ? ]  hwclock
         [ ? ]  hwclock-save
         [ ? ]  irqbalance
         [ - ]  kerneloops
         [ ? ]  killprocs
         [ - ]  lm-sensors
         [ ? ]  module-init-tools
         [ ? ]  network-interface
         [ ? ]  network-interface-security
         [ ? ]  network-manager
         [ ? ]  networking
         [ ? ]  ondemand
         [ ? ]  pcmciautils
         [ ? ]  plymouth
         [ ? ]  plymouth-log
         [ ? ]  plymouth-splash
         [ ? ]  plymouth-stop
         [ ? ]  pppd-dns
         [ ? ]  procps
         [ + ]  pulseaudio
         [ ? ]  rc.local
         [ - ]  rsync
         [ ? ]  rsyslog
         [ - ]  saned
         [ ? ]  screen-cleanup
         [ ? ]  sendsigs
         [ ? ]  speech-dispatcher
         [ ? ]  stop-bootlogd
         [ ? ]  stop-bootlogd-single
         [ ? ]  udev
         [ ? ]  udev-finish
         [ ? ]  udevmonitor
         [ ? ]  udevtrigger
         [ ? ]  ufw
         [ ? ]  umountfs
         [ ? ]  umountnfs.sh
         [ ? ]  umountroot
         [ ? ]  unattended-upgrades
         [ - ]  urandom
         [ + ]  winbind
         [ ? ]  wpa-ifupdown
         [ - ]  x11-common
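
    A small sketch of the low-hanging fruit, based on the listing above (on Lucid, update-rc.d handles the SysV-style entries; winbind is the only clearly unneeded service actually shown as running):

        # Stop and disable services this machine will never use: no Bluetooth,
        # no scanner, no Windows domain membership, no braille display
        for svc in bluetooth brltty saned winbind; do
            sudo service "$svc" stop
            sudo update-rc.d -f "$svc" remove
        done

        # If the friend will never print, cups (also running) can go the same way:
        # sudo service cups stop && sudo update-rc.d -f cups remove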

    Read the article

< Previous Page | 256 257 258 259 260 261 262 263 264 265 266 267  | Next Page >