Search Results

Search found 24891 results on 996 pages for 'update grub'.


  • Multiplayer approach for tablets on wi-fi (FPS/TPS)? Server authority, etc

    - by Fraggle
    Looking for some guidance on what has worked well for others in implementing a multiplayer FPS/TPS type game on tablets (probably just 2-6 players at a time). The main issue is that tablets/phones are typically "less" connected than, say, a console or PC might be. My thought is therefore that complete server authority over everything is not going to work, but maybe I'm off base on that. So I'm struggling with what (if anything) should happen on a central server and what should happen locally. Or is a centralized approach even needed? Some approaches I might take:

    Player movement: my thought is to control this locally (player-owner) and update the server with the position (which then sends it out to the other clients). Use client-side prediction for opponent players so that a connection loss will not, for example, show a plane stop in mid air. The server will send updates and try to smoothly correct an opponent player's position to the server-updated one, but don't update the owner's position on the owner's device from the server.

    Powerups (health kit/ammo/coins/etc): need to see them disappear immediately, so do it locally. Add the health locally, but perhaps allow for server correction. If the server doesn't see the player near that powerup, reject the powerup and adjust the server-side health for that player.

    Fire weapons: have to see it happen right away, so fire locally, create a local bullet and send it on its way. Send an RPC to the server so that this player also fires on the other clients.

    Hit detection: gets trickier. Make the bullet/projectile disappear locally, and perhaps perform local hit animations (shaking, whatever). Non-authoritative approach: take the damage locally and send an RPC to the server or the others to update health and inform them of the hit. Authoritative approach: don't take the damage or adjust health; the server will do that if it detects a hit.

    Anyway, that's my current thought stream. Let me know what you think of the above or what has worked for you.
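    A minimal C# sketch of the remote-player smoothing idea described above: local input moves the owner directly, while remote players ease toward the last server-reported position instead of snapping. The RemotePlayer type and the correctionRate value are placeholders for illustration, not from any particular engine or from the original post.

        using System;
        using System.Numerics;

        // Illustrative remote-player state; the owner's avatar skips this correction
        // entirely, per the "don't update the owner's position from the server" rule.
        class RemotePlayer
        {
            public Vector2 DisplayedPosition;   // what this client renders
            public Vector2 ServerPosition;      // last position reported by the server

            // Called whenever a position update for this player arrives from the server.
            public void OnServerUpdate(Vector2 serverPosition)
            {
                ServerPosition = serverPosition;
            }

            // Called every frame: ease the rendered position toward the server's view
            // so corrections are smooth rather than visually jarring teleports.
            public void Update(float deltaSeconds)
            {
                const float correctionRate = 10f;   // tuning value, assumed for the example
                float t = Math.Min(1f, correctionRate * deltaSeconds);
                DisplayedPosition = Vector2.Lerp(DisplayedPosition, ServerPosition, t);
            }
        }

    Dead reckoning (extrapolating from the last known velocity) could be layered on top of this so a player keeps moving during a brief connection loss, which matches the "don't show a plane stop in mid air" goal in the post.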

    Read the article

  • Can you boot an Acer Aspire One from an SD card when no BIOS is available?

    - by henrijs
    Is it possible to boot the Acer Aspire One PC from an SD card? I have bricked an Aspire One, and it does not even start the BIOS. Aspire Ones have this issue, and a BIOS update usually works; it helped me once in the past, but this time it's all over, and the BIOS update fails. The machine still reads the SD card with the magic Ctrl + Esc shortcut used to launch the BIOS update. Can I trick the computer into booting somehow using this shortcut?

    Read the article

  • Why am I getting this error while installing gnome?

    - by Sreehari Rajendran
    I have Raring Ringtail. I installed GNOME a couple of days ago. I try to install extensions, but the site says I don't have the latest version. I type the command sudo apt-get install gnome-shell and I get this error:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        gnome-shell is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 138 not upgraded.
        1 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]? y
        Setting up initramfs-tools (0.103ubuntu0.7) ...
        update-initramfs: deferring update (trigger activated)
        Processing triggers for initramfs-tools ...
        update-initramfs: Generating /boot/initrd.img-3.8.0-25-generic
        cp: cannot stat ‘/module-files.d/libpango1.0-0.modules’: No such file or directory
        cp: cannot stat ‘/modules/pango-basic-fc.so’: No such file or directory
        E: /usr/share/initramfs-tools/hooks/plymouth failed with return 1.
        update-initramfs: failed for /boot/initrd.img-3.8.0-25-generic with 1.
        dpkg: error processing initramfs-tools (--configure):
        subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Errors were encountered while processing:
        initramfs-tools
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Why?

    Read the article

  • A tip: Updating Data in SharePoint 2010 using REST API

    - by Sahil Malik
    SharePoint 2010 Training: more information. Here is a little tip that will save you hours of head scratching. There are two ways to update data in SharePoint using the REST-based API. A PUT request is used to update an entire entity; if no values are specified for fields in the entity, the fields will be set to default values. A MERGE request is used to update only those field values that have changed; any fields that are not specified by the operation will remain set to their current value. Now, sit back and think about it. With PUT you are going to update the entire entity, which means you need to a) specify every column value, and b) ensure that the read-only values match what was supplied to you. What a pain in the donkey! So 99/100 times, a PUT request will give you an "HTTP 500 internal server error occurred", which is just so helpful. Read full article ....
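    A rough C# sketch of what a MERGE against the SharePoint 2010 listdata.svc REST endpoint can look like; the site URL, the Tasks list name and the item id are purely illustrative assumptions, not taken from the article. Only the changed field is sent, so none of the read-only columns need to be supplied:

        using System;
        using System.IO;
        using System.Net;
        using System.Text;

        class MergeTip
        {
            static void Main()
            {
                // Hypothetical endpoint: site, list ("Tasks") and item id (1) are placeholders.
                string url = "http://intranet/sites/demo/_vti_bin/listdata.svc/Tasks(1)";

                var request = (HttpWebRequest)WebRequest.Create(url);
                request.Credentials = CredentialCache.DefaultCredentials;
                request.Method = "POST";
                request.Headers["X-HTTP-Method"] = "MERGE";   // tunnel the MERGE verb through POST
                request.Headers["If-Match"] = "*";            // skip the etag concurrency check
                request.ContentType = "application/json";

                // Only the fields being changed go into the body.
                byte[] body = Encoding.UTF8.GetBytes("{ \"Title\": \"Updated via MERGE\" }");
                request.ContentLength = body.Length;
                using (Stream stream = request.GetRequestStream())
                {
                    stream.Write(body, 0, body.Length);
                }

                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    Console.WriteLine(response.StatusCode);   // typically 204 No Content on success
                }
            }
        }

    If concurrency matters, the item's actual etag can be sent in If-Match instead of *, in which case a stale etag produces a 412 rather than silently overwriting other changes.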

    Read the article

  • Need Assistance on ASUS K55V USB ports not working

    - by Pascal Schilling
    I have a K55V ASUS laptop with W7 Home (64-bit). Whenever I try to use a mouse (or any other USB device), it doesn't work because of a driver problem. The automatic update tries to update the driver and requires me to reboot directly after, but it doesn't work. Windows Update then wants to re-attempt the update, and then I have to reboot again (etc). Now, after 10 tries, it still doesn't work. Can someone help me out on this? No USB ports work at the moment; the laptop has 2 USB 3.0 ports and 1 USB 2.0 port.

    Read the article

  • Entity communication: Message queue vs Publish/Subscribe vs Signal/Slots

    - by deft_code
    How do game engine entities communicate? Two use cases: How would entity_A send a take-damage message to entity_B? How would entity_A query entity_B's HP? Here's what I've encountered so far:

    Message queue: entity_A creates a take-damage message and posts it to entity_B's message queue. entity_A creates a query-hp message and posts it to entity_B. entity_B in return creates a response-hp message and posts it to entity_A.

    Publish/Subscribe: entity_B subscribes to take-damage messages (possibly with some preemptive filtering so only relevant messages are delivered). entity_A produces a take-damage message that references entity_B. entity_A subscribes to update-hp messages (possibly filtered). Every frame entity_B broadcasts update-hp messages.

    Signal/Slots: ??? entity_A connects an update-hp slot to entity_B's update-hp signal.

    Something better? Do I have a correct understanding of how these communication schemes would tie into a game engine's entity system? How do entities in commercial game engines communicate?
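    A small C# sketch of how the signal/slot option might look using plain .NET events: entity_B exposes an HpChanged signal and a TakeDamage method, and entity_A connects a handler to the signal. The class and member names are illustrative, not from any specific engine:

        using System;

        // Illustrative entity with a "signal" (the event) and a callable "slot" (the method).
        class Entity
        {
            public string Name { get; }
            public int Hp { get; private set; } = 100;

            // Signal: raised whenever HP changes, carrying the sender and the new value.
            public event Action<Entity, int> HpChanged;

            public Entity(string name) { Name = name; }

            public void TakeDamage(int amount)
            {
                Hp -= amount;
                HpChanged?.Invoke(this, Hp);   // notify every connected listener
            }
        }

        class Program
        {
            static void Main()
            {
                var entityA = new Entity("entity_A");
                var entityB = new Entity("entity_B");

                // entity_A connects a slot to entity_B's update-hp signal.
                entityB.HpChanged += (sender, hp) =>
                    Console.WriteLine($"{entityA.Name} sees {sender.Name} at {hp} HP");

                // entity_A damages entity_B; the signal fires immediately and synchronously.
                entityB.TakeDamage(25);   // prints: entity_A sees entity_B at 75 HP
            }
        }

    The notable difference from the queue and publish/subscribe options is that delivery here is immediate and synchronous, which is simple but gives no opportunity to defer, filter or record messages between entities.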

    Read the article

  • Upgrade of Ubuntu 8.10 distribution fails due to missing packages

    - by Tim
    I have a server that I've forgotten to upgrade for ages, which is still running Intrepid (8.10). I'd like to upgrade it to a newer version of the distribution, so that I can get security patches etc. I found some instructions that tell me to install the package update-manager-core. I tried the following: $ sudo apt-get install update-manager-core but this fails since some of the necessary packages can't be found: ... Err http://archive.ubuntu.com intrepid/main python-apt 0.7.7.1ubuntu4 404 Not Found [IP: 91.189.88.40 80] Err http://archive.ubuntu.com intrepid-updates/main update-manager-core 1:0.93.34 404 Not Found [IP: 91.189.88.40 80] Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/p/python-apt/python-apt_0.7.7.1ubuntu4_amd64.deb 404 Not Found [IP: 91.189.88.40 80] Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/u/update-manager/update-manager-core_0.93.34_amd64.deb 404 Not Found [IP: 91.189.88.40 80] ... I know that Intrepid is no longer supported, and so I guess some of the necessary files may no longer be maintained. But this seems rather unhelpful: I can't upgrade because it's too old, and the only way to fix this would be to upgrade it. Is there a way round this? Is something else wrong?

    Read the article

  • Updating Python on Ubuntu system

    - by efficiencyIsBliss
    I want to update the Python build on my Linux box, but the only way I know how to do it is uninstalling the current version and installing the new one. My system is already up to date (I updated yesterday). I wanted to know if there is a way to update a specific program from the command line, like sudo apt-get update <program-name>. I know this command doesn't exist, but I'm hoping something equivalent does.

    Read the article

  • System always halts

    - by user211964
    Good day, and thanks Bruno for the prompt response. First, sorry for my bad writing; I'll try to clarify my problem. My system is now updated to version 13.10. The problem is that my system always stops whenever there is no activity. Examples: I open a terminal, execute "sudo apt-get update", and then walk away from my laptop; the update stops at 20%. After I move my mouse, the update continues. When I watch a movie using VLC, I play the movie and change to full screen; after 30-40 seconds the movie pauses, and again after I move my mouse or hit any key on my keyboard the movie continues playing. I was downloading a big torrent file and left my laptop on the whole night; the next morning the download had simply stopped. In short, my laptop cannot sit idle: when I'm downloading a file or updating my system I just cannot leave the laptop alone, I have to keep it busy by surfing the internet, playing games, etc.

    Read the article

  • How to reset a List in C# and XNA [on hold]

    - by P3erfect
    I need to do a "retry" option when the player finishes the game.For doing this I thought to reset the lists of Monsters and other objects that moved at the first playing or which have been "killed".for example I have a list like that: //the enemy1 class is already done // in Game1 I declare it List<enemy1> enem1 = new List<enemy1>(); //Initialize method List<enemy1> enem1 = new List<enemy1>(); //LoadContent foreach (enemy1 enemy in enem1) { enemy.Load(Content); } enem1.Add(new enemy1(Content.Load<Texture2D>("enemy"), new Vector2(5900, 12600))); //Update foreach (enemy1 enemy in enem1) { enemy.Update(gameTime); } //after being shoted the enemies disappear and i remove them //if the monsters are shoted the bool "visible" goes from false to true for (int i = enem1.Count - 1; i >= 0; --i) { if (enem1[i].visible == true) enem1.RemoveAt(i); } //Draw foreach (enemy1 enemy in enem1) { if(enemy.visble==false) { enemy.Draw(spriteBatch, gameTime); } } //So my problem is to restart the game. I did this in Update method if(lost==false) { //update all the things... } if(lost==true)//this is if I die { //here I have to put the code that restore the list //I tried: foreach (enemy1 enemy in enem1) { enemy.visible=false; } player.life=3;//initializing the player,points,time player.position=initialPosition; points=0; time=0; }//the player works.. } } they should be drawn again but if I removed them they won't be drawn anymore.If I don't remove them ,instead, the enemies are in different places (because they follow me). Any suggestions to restore or reinitialize the list??

    Read the article

  • after upgrade from 10.04 to 12.04 cannot boot with linux 3.2.0-24-generic-pae

    - by Ricardo
    After upgrading from 10.04 to 12.04, I cannot boot with Linux 3.2.0-24-generic-pae: the process freezes on a Xubuntu initialization screen (I had Qimo installed). If I try recovery mode (with the same kernel version), booting freezes after this message: Begin: Mounting root file system. If in the GRUB menu I choose Previous Linux versions, I can boot using Linux 2.6.32-41-generic-pae. But once logged in, some things don't seem to work (apt-get update fails, Update Manager fails, the HID menu does not provide suggestions...) (to be honest, I have no idea whether this is part of the bigger issue). Reading through apparently similar problems on Ask Ubuntu, I decided to follow some advice: I got boot-repair and ran it. The problem remains, and I got this report. I also ran, as root in a terminal:

        sudo update-initramfs -u

    and this is what I got:

        update-initramfs: Generating /boot/initrd.img-3.2.0-24-generic
        cryptsetup: WARNING: failed to detect canonical device of /dev/sda1
        cryptsetup: WARNING: could not determine root device from /etc/fstab
        /tmp/mkinitramfs_EIDlHy/scripts/classmate-bottom/45xconfig: 9: .: Can't open /scripts/casper-functions

    What else? My PC is an Intel® Core™ i7 CPU 870 @ 2.93GHz × 8, graphics is a GeForce 8400 GS/PCIe/SSE2, memory is 7.8 GiB. I have two questions: Is this a bug in the newest kernel I should report? Is there anything I can do apart from a fresh install?

    Read the article

  • Open Office: How to disable image link updates

    - by Max Kielland
    I'm writing a user manual for a card game and there are a looot of linked images. Open Office is working very slowly because every time I flip to a page with linked images it starts to update them. Is it possible to tell Open Office NOT to update the links until I tell it to do so? I would like it to display the same snapshot it showed the last time I initiated a link update. I'm using Open Office v3.3.0. Thank you.

    Read the article

  • Windows 8 Stuck on Start Screen File Search

    - by baturayd
    I have been searching for someone else having the same issue, but I couldn't find any. Here is my problem: I'm using Windows 8 Pro with Media Center. The file search screen does not close itself after I make a file search within the Start screen; it gets stuck on that screen. I can't go back to the desktop, therefore Task Manager is inaccessible. The only way to get out of it is to sign out. It also looks non-operational: it doesn't give me any results at all, just a blank screen with a "Files" title. It used to work perfectly.

    Things I have done before coming here: a safe mode minimal boot to ensure no other 3rd-party software interferes; sfc hasn't found any inconsistencies; the search index has been rebuilt; a normal boot with all 3rd-party services and start-up items disabled; a system restore to a date when I know this feature was functional. And by the way, I have installed all updates released so far. I used file search via the Start menu heavily back in Windows 7, so this is an absolute game changer for me. I'm curious what causes this. I'll do a "system refresh" if I can't fix this. I'm working on it; I'll keep this thread updated should I find any fix.

    First update: I just discovered that the file search screen gets stuck if it's invoked by typing a query directly in the Start screen. It doesn't get stuck if it's invoked from the Win + X menu, and I was able to "escape" from the stuck screen by invoking it again via the Win + X menu. After rebuilding the search index again, search suggestions started to appear again, but there is still no file search functionality.

    Second update: a "Results for" expression appears only for a second near the "Files" title when a search is initialized.

    Third update: As a last resort, I finally tried to do a "System Refresh", which also failed, giving an error after waiting almost 20 minutes at 99%. (Seriously?) After a cold boot it began to undo the changes. After the changes were reverted, I booted the machine without doing anything further and, bingo, search began acting normally again. This is a totally weird solution. It obviously fixed certain system files during the failed refresh and kept those changes because they were supposed to be that way in the first place. I'll keep this thread alive should anyone come up with a more logical explanation.

    Fourth update: After a Windows update, search functionality stopped responding again. This is the first time it has happened after a system update. Now I have a pretty strong suspicion about system updates, though I have no solid proof of their involvement in this problem.

    Read the article

  • Internet Explorer 9 beta installation problem

    - by Eric
    I'm running Vista SP2, x64. Because I wanted to test out the IE9 beta, I downloaded the English-language installer for 64-bit Vista systems. Running the installer is fine until it starts downloading required updates. The progress bar doesn't get far before it completely stops moving. Then after 20 minutes to an hour, it will tell me that there's an update I have to install, but as soon as I click OK it sends me an error message, telling me that it can't go to the url of the update which is here. So I manually enter it into my browser, which prompts me to download a standalone update. After that's been downloaded and I run it, it tells me that the update does not apply to my system. I'd appreciate any help to solving my problem.

    Read the article

  • SVN hook on commit causes a lock

    - by dynback.com
    My post-commit hook looks like this:

        pushd C:\Websites\Project
        svn update

    I am updating my server's working copy of the repository. When I commit, the client stops on "sending content" and locks up, or waits for something. So when I cancel and try to update manually on the server, I see "svn: Working copy '.' locked". Only after a manual cleanup and another update do I get the revision that was actually committed. What am I doing wrong?

    Read the article

  • invalid context 0x0 under iOS 7.0 and system degradation

    - by Alex
    I've read as many search results as I could find on this dreaded problem; unfortunately, each one seems to focus on a specific function call. My problem is that I get the same error from multiple functions, which I am guessing are being called back from functions that I use. To make matters worse, the actual code is within a custom private framework which is imported into another project, so debugging isn't as simple. Can anyone point me in the right direction? I have a feeling I'm calling certain methods wrongly or with a bad context, but the output from Xcode is not very helpful at this point. Each of the following calls logs the same message:

        CGContextSetFillColorWithColor: invalid context 0x0.
        CGContextSetStrokeColorWithColor: invalid context 0x0.
        CGContextSaveGState: invalid context 0x0.
        CGContextSetFlatness: invalid context 0x0.
        CGContextAddPath: invalid context 0x0.
        CGContextDrawPath: invalid context 0x0.
        CGContextRestoreGState: invalid context 0x0.
        CGContextGetBlendMode: invalid context 0x0.

    Each is followed by the same notice: "This is a serious error. This application, or a library it uses, is using an invalid context and is thereby contributing to an overall degradation of system stability and reliability. This notice is a courtesy: please fix this problem. It will become a fatal error in an upcoming update."

    These errors may occur when a custom view, or one of its inherited classes, is presented. At that point they spawn multiple times, until the keyboard won't provide any input. Touch events are still registered, but the system slows down, and eventually this may lead to unallocated-object errors.

    Read the article

  • mysql connect error issue

    - by Alex
    I've php page which update Mysql Db. I don't understand why my following php code is saying that "Could not update marker. No database selected". Strange!! can you please tell me why it's showing error message. Thanks. Php code: <?php // database settings $db_username = 'root'; $db_password = ''; $db_name = 'parkool'; $db_host = 'localhost'; //mysqli $mysqli = new mysqli($db_host, $db_username, $db_password, $db_name); if (mysqli_connect_errno()) { header('HTTP/1.1 500 Error: Could not connect to db!'); exit(); } ################ Save & delete markers ################# if($_POST) //run only if there's a post data { //make sure request is comming from Ajax $xhr = $_SERVER['HTTP_X_REQUESTED_WITH'] == 'XMLHttpRequest'; if (!$xhr){ header('HTTP/1.1 500 Error: Request must come from Ajax!'); exit(); } // get marker position and split it for database $mLatLang = explode(',',$_POST["latlang"]); $mLat = filter_var($mLatLang[0], FILTER_VALIDATE_FLOAT); $mLng = filter_var($mLatLang[1], FILTER_VALIDATE_FLOAT); $mName = filter_var($_POST["name"], FILTER_SANITIZE_STRING); $mAddress = filter_var($_POST["address"], FILTER_SANITIZE_STRING); $mId = filter_var($_POST["id"], FILTER_SANITIZE_STRING); /*$result = mysql_query("SELECT id FROM test.markers WHERE test.markers.lat=$mLat AND test.markers.lng=$mLng"); if (!$result) { echo 'Could not run query: ' . mysql_error(); exit; } $row = mysql_fetch_row($result); $id=$row[0];*/ //$output = '<h1 class="marker-heading">'.$mId.'</h1><p>'.$mAddress.'</p>'; //exit($output); //Update Marker if(isset($_POST["update"]) && $_POST["update"]==true) { $results = mysql_query("UPDATE parkings SET latitude = '$mLat', longitude = '$mLng' WHERE locId = '94' "); if (!$results) { //header('HTTP/1.1 500 Error: Could not Update Markers! $mId'); echo "coudld not update marker." . mysql_error(); exit(); } exit("Done!"); } $output = '<h1 class="marker-heading">'.$mName.'</h1><p>'.$mAddress.'</p>'; exit($output); } ############### Continue generating Map XML ################# //Create a new DOMDocument object $dom = new DOMDocument("1.0"); $node = $dom->createElement("markers"); //Create new element node $parnode = $dom->appendChild($node); //make the node show up // Select all the rows in the markers table $results = $mysqli->query("SELECT * FROM parkings WHERE 1"); if (!$results) { header('HTTP/1.1 500 Error: Could not get markers!'); exit(); } //set document header to text/xml header("Content-type: text/xml"); // Iterate through the rows, adding XML nodes for each while($obj = $results->fetch_object()) { $node = $dom->createElement("marker"); $newnode = $parnode->appendChild($node); $newnode->setAttribute("name",$obj->name); $newnode->setAttribute("locId",$obj->locId); $newnode->setAttribute("address", $obj->address); $newnode->setAttribute("latitude", $obj->latitude); $newnode->setAttribute("longitude", $obj->longitude); //$newnode->setAttribute("type", $obj->type); } echo $dom->saveXML();

    Read the article

  • Of these 3 methods for reading linked lists from shared memory, why is the 3rd fastest?

    - by Joseph Garvin
    I have a 'server' program that updates many linked lists in shared memory in response to external events. I want client programs to notice an update on any of the lists as quickly as possible (lowest latency). The server marks a linked list's node's state_ as FILLED once its data is filled in and its next pointer has been set to a valid location. Until then, its state_ is NOT_FILLED_YET. I am using memory barriers to make sure that clients don't see the state_ as FILLED before the data within is actually ready (and it seems to work, I never see corrupt data). Also, state_ is volatile to be sure the compiler doesn't lift the client's checking of it out of loops. Keeping the server code exactly the same, I've come up with 3 different methods for the client to scan the linked lists for changes. The question is: Why is the 3rd method fastest? Method 1: Round robin over all the linked lists (called 'channels') continuously, looking to see if any nodes have changed to 'FILLED': void method_one() { std::vector<Data*> channel_cursors; for(ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i) { Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment)); channel_cursors.push_back(current_item); } while(true) { for(std::size_t i = 0; i < channel_list.size(); ++i) { Data* current_item = channel_cursors[i]; ACQUIRE_MEMORY_BARRIER; if(current_item->state_ == NOT_FILLED_YET) { continue; } log_latency(current_item->tv_sec_, current_item->tv_usec_); channel_cursors[i] = static_cast<Data*>(current_item->next_.get(segment)); } } } Method 1 gave very low latency when then number of channels was small. But when the number of channels grew (250K+) it became very slow because of looping over all the channels. So I tried... Method 2: Give each linked list an ID. Keep a separate 'update list' to the side. Every time one of the linked lists is updated, push its ID on to the update list. Now we just need to monitor the single update list, and check the IDs we get from it. void method_two() { std::vector<Data*> channel_cursors; for(ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i) { Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment)); channel_cursors.push_back(current_item); } UpdateID* update_cursor = static_cast<UpdateID*>(update_channel.tail_.get(segment)); while(true) { if(update_cursor->state_ == NOT_FILLED_YET) { continue; } ::uint32_t update_id = update_cursor->list_id_; Data* current_item = channel_cursors[update_id]; if(current_item->state_ == NOT_FILLED_YET) { std::cerr << "This should never print." << std::endl; // it doesn't continue; } log_latency(current_item->tv_sec_, current_item->tv_usec_); channel_cursors[update_id] = static_cast<Data*>(current_item->next_.get(segment)); update_cursor = static_cast<UpdateID*>(update_cursor->next_.get(segment)); } } Method 2 gave TERRIBLE latency. Whereas Method 1 might give under 10us latency, Method 2 would inexplicably often given 8ms latency! Using gettimeofday it appears that the change in update_cursor-state_ was very slow to propogate from the server's view to the client's (I'm on a multicore box, so I assume the delay is due to cache). So I tried a hybrid approach... Method 3: Keep the update list. But loop over all the channels continuously, and within each iteration check if the update list has updated. If it has, go with the number pushed onto it. If it hasn't, check the channel we've currently iterated to. 
void method_three() { std::vector<Data*> channel_cursors; for(ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i) { Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment)); channel_cursors.push_back(current_item); } UpdateID* update_cursor = static_cast<UpdateID*>(update_channel.tail_.get(segment)); while(true) { for(std::size_t i = 0; i < channel_list.size(); ++i) { std::size_t idx = i; ACQUIRE_MEMORY_BARRIER; if(update_cursor->state_ != NOT_FILLED_YET) { //std::cerr << "Found via update" << std::endl; i--; idx = update_cursor->list_id_; update_cursor = static_cast<UpdateID*>(update_cursor->next_.get(segment)); } Data* current_item = channel_cursors[idx]; ACQUIRE_MEMORY_BARRIER; if(current_item->state_ == NOT_FILLED_YET) { continue; } found_an_update = true; log_latency(current_item->tv_sec_, current_item->tv_usec_); channel_cursors[idx] = static_cast<Data*>(current_item->next_.get(segment)); } } } The latency of this method was as good as Method 1, but scaled to large numbers of channels. The problem is, I have no clue why. Just to throw a wrench in things: if I uncomment the 'found via update' part, it prints between EVERY LATENCY LOG MESSAGE. Which means things are only ever found on the update list! So I don't understand how this method can be faster than method 2. The full, compilable code (requires GCC and boost-1.41) that generates random strings as test data is at: http://pastebin.com/e3HuL0nr

    Read the article

  • What do these .NET auto-generated table adapter commands do? e.g. UPDATE/INSERT followed by a SELECT

    - by RickL
    I'm working with a legacy application which I'm trying to change so that it can work with SQL CE, whilst it was originally written against SQL Server. The problem I am getting now is that when I try to do dataAdapter.Update, SQL CE complains that it is not expecting the SELECT keyword in the command text. I believe this is because SQL CE does not support batch SELECT statements. The auto-generated table adapter command looks like this... this._adapter.InsertCommand.CommandText = @"INSERT INTO [Table] ([Field1], [Field2]) VALUES (@Value1, @Value2); SELECT Field1, Field2 FROM Table WHERE (Field1 = @Value1)"; What is it doing? It looks like it is inserting new records from the datatable into the database, and then reading that record back from the database into the datatable? What's the point of that? Can I just go through the code and remove all these SELECT statements? Or is there an easier way to solve my problem of wanting to use these data adapters with SQL CE? I cannot regenerate these table adapters, as the people who knew how to have long since left.
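    Since the question is whether the batched SELECT can simply be removed: its job is to read server-generated values (identity columns, defaults) back into the DataRow right after the INSERT. If nothing reads those columns back out of the DataTable afterwards, deleting the SELECT statements should be enough. A hedged sketch of keeping the refresh behaviour on SQL CE, which cannot batch statements, is to strip the SELECT from the command text and re-fetch in the adapter's RowUpdated event; the table, column and connection names below are assumptions based on the question's placeholders:

        using System.Data;
        using System.Data.SqlServerCe;   // SQL Server Compact ADO.NET provider

        // 'connection' is assumed to be an open SqlCeConnection;
        // field types (nvarchar(50)) are assumed for illustration.
        var insert = new SqlCeCommand(
            "INSERT INTO [Table] ([Field1], [Field2]) VALUES (@Value1, @Value2)", connection);
        insert.Parameters.Add("@Value1", SqlDbType.NVarChar, 50, "Field1");
        insert.Parameters.Add("@Value2", SqlDbType.NVarChar, 50, "Field2");

        var adapter = new SqlCeDataAdapter { InsertCommand = insert };

        // Second round-trip per inserted row, replacing the batched SELECT.
        adapter.RowUpdated += (sender, e) =>
        {
            if (e.StatementType != StatementType.Insert) return;
            using (var getId = new SqlCeCommand("SELECT @@IDENTITY", connection))
            {
                e.Row["Id"] = getId.ExecuteScalar();   // assumes an identity column named Id
            }
        };

    This is an illustrative pattern rather than the generated designer code; whether the extra round-trip is worth it depends on whether anything actually consumes the refreshed values.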

    Read the article

  • Updating a SharePoint master page via a solution (WSP)

    - by Kelly Jones
    In my last blog post, I wrote how to deploy a SharePoint theme using Features and a solution package.  As promised in that post, here is how to update an already deployed master page. There are several ways to update a master page in SharePoint.  You could upload a new version to the master page gallery, or you could upload a new master page to the gallery, and then set the site to use this new page.  Manually uploading your master page to the master page gallery might be the best option, depending on your environment.  For my client, I did these steps in code, which is what they preferred. (Image courtesy of: http://www.joiningdots.net/blog/2007/08/sharepoint-and-quick-launch.html ) Before you decide which method you need to use, take a look at your existing pages.  Are they using the SharePoint dynamic token or the static token for the master page reference?  The wha, huh? SO, there are four ways to tell an .aspx page hosted in SharePoint which master page it should use: “~masterurl/default.master” – tells the page to use the default.master property of the site “~masterurl/custom.master” – tells the page to use the custom.master property of the site “~site/default.master” – tells the page to use the file named “default.master” in the site’s master page gallery “~sitecollection/default.master” – tells the page to use the file named “default.master” in the site collection’s master page gallery For more information about these tokens, take a look at this article on MSDN. Once you determine which token your existing pages are pointed to, then you know which file you need to update.  So, if the ~masterurl tokens are used, then you upload a new master page, either replacing the existing one or adding another one to the gallery.  If you’ve uploaded a new file with a new name, you’ll just need to set it as the master page either through the UI (MOSS only) or through code (MOSS or WSS Feature receiver code – or using SharePoint Designer). If the ~site or ~sitecollection tokens were used, then you’re limited to either replacing the existing master page, or editing all of your existing pages to point to another master page.  In most cases, it probably makes sense to just replace the master page. For my project, I’m working with WSS and the existing pages are set to the ~sitecollection token.  Based on this, I decided to just upload a new version of the existing master page (and not modify the dozens of existing pages). Also, since my client prefers Features and solutions, I created a master page Feature and a corresponding Feature Receiver.  For information on creating the elements and feature files, check out this post: http://sharepointmagazine.net/technical/development/deploying-the-master-page . This works fine, unless you are overwriting an existing master page, which was my case.  You’ll run into errors because the master page file needs to be checked out, replaced, and then checked in.  I wrote code in my Feature Activated event handler to accomplish these steps. 
Here are the steps necessary in code: Get the file name from the elements file of the Feature Check out the file from the master page gallery Upload the file to the master page gallery Check in the file to the master page gallery Here’s the code in my Feature Receiver: 1: public override void FeatureActivated(SPFeatureReceiverProperties properties) 2: { 3: try 4: { 5:   6: SPElementDefinitionCollection col = properties.Definition.GetElementDefinitions(System.Globalization.CultureInfo.CurrentCulture); 7:   8: using (SPWeb curweb = GetCurWeb(properties)) 9: { 10: foreach (SPElementDefinition ele in col) 11: { 12: if (string.Compare(ele.ElementType, "Module", true) == 0) 13: { 14: // <Module Name="DefaultMasterPage" List="116" Url="_catalogs/masterpage" RootWebOnly="FALSE"> 15: // <File Url="myMaster.master" Type="GhostableInLibrary" IgnoreIfAlreadyExists="TRUE" 16: // Path="MasterPages/myMaster.master" /> 17: // </Module> 18: string Url = ele.XmlDefinition.Attributes["Url"].Value; 19: foreach (System.Xml.XmlNode file in ele.XmlDefinition.ChildNodes) 20: { 21: string Url2 = file.Attributes["Url"].Value; 22: string Path = file.Attributes["Path"].Value; 23: string fileType = file.Attributes["Type"].Value; 24:   25: if (string.Compare(fileType, "GhostableInLibrary", true) == 0) 26: { 27: //Check out file in document library 28: SPFile existingFile = curweb.GetFile(Url + "/" + Url2); 29:   30: if (existingFile != null) 31: { 32: if (existingFile.CheckOutStatus != SPFile.SPCheckOutStatus.None) 33: { 34: throw new Exception("The master page file is already checked out. Please make sure the master page file is checked in, before activating this feature."); 35: } 36: else 37: { 38: existingFile.CheckOut(); 39: existingFile.Update(); 40: } 41: } 42:   43: //Upload file to document library 44: string filePath = System.IO.Path.Combine(properties.Definition.RootDirectory, Path); 45: string fileName = System.IO.Path.GetFileName(filePath); 46: char slash = Convert.ToChar("/"); 47: string[] folders = existingFile.ParentFolder.Url.Split(slash); 48:   49: if (folders.Length > 2) 50: { 51: Logger.logMessage("More than two folders were detected in the library path for the master page. Only two are supported.", 52: Logger.LogEntryType.Information); //custom logging component 53: } 54:   55: SPFolder myLibrary = curweb.Folders[folders[0]].SubFolders[folders[1]]; 56:   57: FileStream fs = File.OpenRead(filePath); 58:   59: SPFile newFile = myLibrary.Files.Add(fileName, fs, true); 60:   61: myLibrary.Update(); 62: newFile.CheckIn("Updated by Feature", SPCheckinType.MajorCheckIn); 63: newFile.Update(); 64: } 65: } 66: } 67: } 68: } 69: } 70: catch (Exception ex) 71: { 72: string msg = "Error occurred during feature activation"; 73: Logger.logException(ex, msg, ""); 74: } 75:   76: } 77:   78: /// <summary> 79: /// Using a Feature's properties, get a reference to the Current Web 80: /// </summary> 81: /// <param name="properties"></param> 82: public SPWeb GetCurWeb(SPFeatureReceiverProperties properties) 83: { 84: SPWeb curweb; 85:   86: //Check if the parent of the web is a site or a web 87: if (properties != null && properties.Feature.Parent.GetType().ToString() == "Microsoft.SharePoint.SPWeb") 88: { 89:   90: //Get web from parent 91: curweb = (SPWeb)properties.Feature.Parent; 92: 93: } 94: else 95: { 96: //Get web from Site 97: using (SPSite cursite = (SPSite)properties.Feature.Parent) 98: { 99: curweb = (SPWeb)cursite.OpenWeb(); 100: } 101: } 102:   103: return curweb; 104: } This did the trick.  
    It allowed me to update my existing master page through an easily repeatable process (which is great when you are working with more than one environment and want to do things like TEST it!).  I did run into what I would classify as a strange issue with one of my subsites, but that’s the topic for another blog post.

    Read the article

  • Benchmarking MySQL Replication with Multi-Threaded Slaves

    - by Mat Keep
    The objective of this benchmark is to measure the performance improvement achieved when enabling the Multi-Threaded Slave enhancement delivered as part of MySQL 5.6. As the results demonstrate, Multi-Threaded Slaves deliver 5x higher replication performance based on a configuration with 10 databases/schemas. For real-world deployments, higher replication performance directly translates to:
    · Improved consistency of reads from slaves (i.e. reduced risk of reading "stale" data)
    · Reduced risk of data loss should the master fail before replicating all events in its binary log (binlog)
    The multi-threaded slave splits processing between worker threads based on schema, allowing updates to be applied in parallel rather than sequentially. This delivers benefits to those workloads that isolate application data using databases, e.g. multi-tenant systems deployed in cloud environments. Multi-Threaded Slaves are just one of many enhancements to replication previewed as part of the MySQL 5.6 Development Release, which include:
    · Global Transaction Identifiers coupled with MySQL utilities for automatic failover / switchover and slave promotion
    · Crash Safe Slaves and Binlog
    · Optimized Row Based Replication
    · Replication Event Checksums
    · Time Delayed Replication
    These and many more are discussed in the “MySQL 5.6 Replication: Enabling the Next Generation of Web & Cloud Services” Developer Zone article. Back to the benchmark; details are as follows.

    Environment
    The test environment consisted of two Linux servers:
    · one running the replication master
    · one running the replication slave.
    Only the slave was involved in the actual measurements, and was based on the following configuration:
    - Hardware: Oracle Sun Fire X4170 M2 Server
    - CPU: 2 sockets, 6 cores with hyper-threading, 2930 MHz
    - OS: 64-bit Oracle Enterprise Linux 6.1
    - Memory: 48 GB

    Test Procedure
    Initial Setup: Two MySQL servers were started on two different hosts, configured as replication master and slave. 10 sysbench schemas were created, each with a single table:

        CREATE TABLE `sbtest` (
          `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
          `k` int(10) unsigned NOT NULL DEFAULT '0',
          `c` char(120) NOT NULL DEFAULT '',
          `pad` char(60) NOT NULL DEFAULT '',
          PRIMARY KEY (`id`),
          KEY `k` (`k`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1

    10,000 rows were inserted in each of the 10 tables, for a total of 100,000 rows. When the inserts had replicated to the slave, the slave threads were stopped. The slave data directory was copied to a backup location and the slave threads' position in the master binlog noted. 10 sysbench clients, each configured with 10 threads, were spawned at the same time to generate a random schema load against each of the 10 schemas on the master.
Each sysbench client executed 10,000 "update key" statements: UPDATE sbtest set k=k+1 WHERE id = <random row> In total, this generated 100,000 update statements to later replicate during the test itself. Test Methodology: The number of slave workers to test with was configured using: SET GLOBAL slave_parallel_workers=<workers> Then the slave IO thread was started and the test waited for all the update queries to be copied over to the relay log on the slave. The benchmark clock was started and then the slave SQL thread was started. The test waited for the slave SQL thread to finish executing the 100k update queries, doing "select master_pos_wait()". When master_pos_wait() returned, the benchmark clock was stopped and the duration calculated. The calculated duration from the benchmark clock should be close to the time it took for the SQL thread to execute the 100,000 update queries. The 100k queries divided by this duration gave the benchmark metric, reported as Queries Per Second (QPS). Test Reset: The test-reset cycle was implemented as follows: · the slave was stopped · the slave data directory replaced with the previous backup · the slave restarted with the slave threads replication pointer repositioned to the point before the update queries in the binlog. The test could then be repeated with identical set of queries but a different number of slave worker threads, enabling a fair comparison. The Test-Reset cycle was repeated 3 times for 0-24 number of workers and the QPS metric calculated and averaged for each worker count. MySQL Configuration The relevant configuration settings used for MySQL are as follows: binlog-format=STATEMENT relay-log-info-repository=TABLE master-info-repository=TABLE As described in the test procedure, the slave_parallel_workers setting was modified as part of the test logic. The consequence of changing this setting is: 0 worker threads:    - current (i.e. single threaded) sequential mode    - 1 x IO thread and 1 x SQL thread    - SQL thread both reads and executes the events 1 worker thread:    - sequential mode    - 1 x IO thread, 1 x Coordinator SQL thread and 1 x Worker thread    - coordinator reads the event and hands it to the worker who executes 2+ worker threads:    - parallel execution    - 1 x IO thread, 1 x Coordinator SQL thread and 2+ Worker threads    - coordinator reads events and hands them to the workers who execute them Results Figure 1 below shows that Multi-Threaded Slaves deliver ~5x higher replication performance when configured with 10 worker threads, with the load evenly distributed across our 10 x schemas. This result is compared to the current replication implementation which is based on a single SQL thread only (i.e. zero worker threads). Figure 1: 5x Higher Performance with Multi-Threaded Slaves The following figure shows more detailed results, with QPS sampled and reported as the worker threads are incremented. The raw numbers behind this graph are reported in the Appendix section of this post. Figure 2: Detailed Results As the results above show, the configuration does not scale noticably from 5 to 9 worker threads. When configured with 10 worker threads however, scalability increases significantly. The conclusion therefore is that it is desirable to configure the same number of worker threads as schemas. Other conclusions from the results: · Running with 1 worker compared to zero workers just introduces overhead without the benefit of parallel execution. · As expected, having more workers than schemas adds no visible benefit. 
Aside from what is shown in the results above, testing also demonstrated that the following settings had a very positive effect on slave performance: relay-log-info-repository=TABLE master-info-repository=TABLE For 5+ workers, it was up to 2.3 times as fast to run with TABLE compared to FILE. Conclusion As the results demonstrate, Multi-Threaded Slaves deliver significant performance increases to MySQL replication when handling multiple schemas. This, and the other replication enhancements introduced in MySQL 5.6 are fully available for you to download and evaluate now from the MySQL Developer site (select Development Release tab). You can learn more about MySQL 5.6 from the documentation  Please don’t hesitate to comment on this or other replication blogs with feedback and questions. Appendix – Detailed Results

    Read the article

  • Advantage Database Server: slow stored procedure performance.

    - by ie
    I have a question about a performance of stored procedures in the ADS. I created a simple database with the following structure: CREATE TABLE MainTable ( Id INTEGER PRIMARY KEY, Name VARCHAR(50), Value INTEGER ); CREATE UNIQUE INDEX MainTableName_UIX ON MainTable ( Name ); CREATE TABLE SubTable ( Id INTEGER PRIMARY KEY, MainId INTEGER, Name VARCHAR(50), Value INTEGER ); CREATE INDEX SubTableMainId_UIX ON SubTable ( MainId ); CREATE UNIQUE INDEX SubTableName_UIX ON SubTable ( Name ); CREATE PROCEDURE CreateItems ( MainName VARCHAR ( 20 ), SubName VARCHAR ( 20 ), MainValue INTEGER, SubValue INTEGER, MainId INTEGER OUTPUT, SubId INTEGER OUTPUT ) BEGIN DECLARE @MainName VARCHAR ( 20 ); DECLARE @SubName VARCHAR ( 20 ); DECLARE @MainValue INTEGER; DECLARE @SubValue INTEGER; DECLARE @MainId INTEGER; DECLARE @SubId INTEGER; @MainName = (SELECT MainName FROM __input); @SubName = (SELECT SubName FROM __input); @MainValue = (SELECT MainValue FROM __input); @SubValue = (SELECT SubValue FROM __input); @MainId = (SELECT MAX(Id)+1 FROM MainTable); @SubId = (SELECT MAX(Id)+1 FROM SubTable ); INSERT INTO MainTable (Id, Name, Value) VALUES (@MainId, @MainName, @MainValue); INSERT INTO SubTable (Id, Name, MainId, Value) VALUES (@SubId, @SubName, @MainId, @SubValue); INSERT INTO __output SELECT @MainId, @SubId FROM system.iota; END; CREATE PROCEDURE UpdateItems ( MainName VARCHAR ( 20 ), MainValue INTEGER, SubValue INTEGER ) BEGIN DECLARE @MainName VARCHAR ( 20 ); DECLARE @MainValue INTEGER; DECLARE @SubValue INTEGER; DECLARE @MainId INTEGER; @MainName = (SELECT MainName FROM __input); @MainValue = (SELECT MainValue FROM __input); @SubValue = (SELECT SubValue FROM __input); @MainId = (SELECT TOP 1 Id FROM MainTable WHERE Name = @MainName); UPDATE MainTable SET Value = @MainValue WHERE Id = @MainId; UPDATE SubTable SET Value = @SubValue WHERE MainId = @MainId; END; CREATE PROCEDURE SelectItems ( MainName VARCHAR ( 20 ), CalculatedValue INTEGER OUTPUT ) BEGIN DECLARE @MainName VARCHAR ( 20 ); @MainName = (SELECT MainName FROM __input); INSERT INTO __output SELECT m.Value * s.Value FROM MainTable m INNER JOIN SubTable s ON m.Id = s.MainId WHERE m.Name = @MainName; END; CREATE PROCEDURE DeleteItems ( MainName VARCHAR ( 20 ) ) BEGIN DECLARE @MainName VARCHAR ( 20 ); DECLARE @MainId INTEGER; @MainName = (SELECT MainName FROM __input); @MainId = (SELECT TOP 1 Id FROM MainTable WHERE Name = @MainName); DELETE FROM SubTable WHERE MainId = @MainId; DELETE FROM MainTable WHERE Id = @MainId; END; Actually, the problem I had - even so light stored procedures work very-very slow (about 50-150 ms) relatively to plain queries (0-5ms). 
To test the performance, I created a simple test (in F# using ADS ADO.NET provider): open System; open System.Data; open System.Diagnostics; open Advantage.Data.Provider; let mainName = "main name #"; let subName = "sub name #"; // INSERT let cmdTextScriptInsert = " DECLARE @MainId INTEGER; DECLARE @SubId INTEGER; @MainId = (SELECT MAX(Id)+1 FROM MainTable); @SubId = (SELECT MAX(Id)+1 FROM SubTable ); INSERT INTO MainTable (Id, Name, Value) VALUES (@MainId, :MainName, :MainValue); INSERT INTO SubTable (Id, Name, MainId, Value) VALUES (@SubId, :SubName, @MainId, :SubValue); SELECT @MainId, @SubId FROM system.iota;"; let cmdTextProcedureInsert = "CreateItems"; // UPDATE let cmdTextScriptUpdate = " DECLARE @MainId INTEGER; @MainId = (SELECT TOP 1 Id FROM MainTable WHERE Name = :MainName); UPDATE MainTable SET Value = :MainValue WHERE Id = @MainId; UPDATE SubTable SET Value = :SubValue WHERE MainId = @MainId;"; let cmdTextProcedureUpdate = "UpdateItems"; // SELECT let cmdTextScriptSelect = " SELECT m.Value * s.Value FROM MainTable m INNER JOIN SubTable s ON m.Id = s.MainId WHERE m.Name = :MainName;"; let cmdTextProcedureSelect = "SelectItems"; // DELETE let cmdTextScriptDelete = " DECLARE @MainId INTEGER; @MainId = (SELECT TOP 1 Id FROM MainTable WHERE Name = :MainName); DELETE FROM SubTable WHERE MainId = @MainId; DELETE FROM MainTable WHERE Id = @MainId;"; let cmdTextProcedureDelete = "DeleteItems"; let cnnStr = @"data source=D:\DB\test.add; ServerType=local; user id=adssys; password=***;"; let cnn = new AdsConnection(cnnStr); try cnn.Open(); let cmd = cnn.CreateCommand(); let parametrize ix prms = cmd.Parameters.Clear(); let addParam = function | "MainName" -> cmd.Parameters.Add(":MainName" , mainName + ix.ToString()) |> ignore; | "SubName" -> cmd.Parameters.Add(":SubName" , subName + ix.ToString() ) |> ignore; | "MainValue" -> cmd.Parameters.Add(":MainValue", ix * 3 ) |> ignore; | "SubValue" -> cmd.Parameters.Add(":SubValue" , ix * 7 ) |> ignore; | _ -> () prms |> List.iter addParam; let runTest testData = let (cmdType, cmdName, cmdText, cmdParams) = testData; let toPrefix cmdType cmdName = let prefix = match cmdType with | CommandType.StoredProcedure -> "Procedure-" | CommandType.Text -> "Script -" | _ -> "Unknown -" in prefix + cmdName; let stopWatch = new Stopwatch(); let runStep ix prms = parametrize ix prms; stopWatch.Start(); cmd.ExecuteNonQuery() |> ignore; stopWatch.Stop(); cmd.CommandText <- cmdText; cmd.CommandType <- cmdType; let startId = 1500; let count = 10; for id in startId .. 
startId+count do runStep id cmdParams; let elapsed = stopWatch.Elapsed; Console.WriteLine("Test '{0}' - total: {1}; per call: {2}ms", toPrefix cmdType cmdName, elapsed, Convert.ToInt32(elapsed.TotalMilliseconds)/count); let lst = [ (CommandType.Text, "Insert", cmdTextScriptInsert, ["MainName"; "SubName"; "MainValue"; "SubValue"]); (CommandType.Text, "Update", cmdTextScriptUpdate, ["MainName"; "MainValue"; "SubValue"]); (CommandType.Text, "Select", cmdTextScriptSelect, ["MainName"]); (CommandType.Text, "Delete", cmdTextScriptDelete, ["MainName"]) (CommandType.StoredProcedure, "Insert", cmdTextProcedureInsert, ["MainName"; "SubName"; "MainValue"; "SubValue"]); (CommandType.StoredProcedure, "Update", cmdTextProcedureUpdate, ["MainName"; "MainValue"; "SubValue"]); (CommandType.StoredProcedure, "Select", cmdTextProcedureSelect, ["MainName"]); (CommandType.StoredProcedure, "Delete", cmdTextProcedureDelete, ["MainName"])]; lst |> List.iter runTest; finally cnn.Close(); And I'm getting the following results: Test 'Script -Insert' - total: 00:00:00.0292841; per call: 2ms Test 'Script -Update' - total: 00:00:00.0056296; per call: 0ms Test 'Script -Select' - total: 00:00:00.0051738; per call: 0ms Test 'Script -Delete' - total: 00:00:00.0059258; per call: 0ms Test 'Procedure-Insert' - total: 00:00:01.2567146; per call: 125ms Test 'Procedure-Update' - total: 00:00:00.7442440; per call: 74ms Test 'Procedure-Select' - total: 00:00:00.5120446; per call: 51ms Test 'Procedure-Delete' - total: 00:00:01.0619165; per call: 106ms The situation with the remote server is much better, but still a great gap between plaqin queries and stored procedures: Test 'Script -Insert' - total: 00:00:00.0709299; per call: 7ms Test 'Script -Update' - total: 00:00:00.0161777; per call: 1ms Test 'Script -Select' - total: 00:00:00.0258113; per call: 2ms Test 'Script -Delete' - total: 00:00:00.0166242; per call: 1ms Test 'Procedure-Insert' - total: 00:00:00.5116138; per call: 51ms Test 'Procedure-Update' - total: 00:00:00.3802251; per call: 38ms Test 'Procedure-Select' - total: 00:00:00.1241245; per call: 12ms Test 'Procedure-Delete' - total: 00:00:00.4336334; per call: 43ms Is it any chance to improve the SP performance? Please advice. ADO.NET driver version - 9.10.2.9 Server version - 9.10.0.9 (ANSI - GERMAN, OEM - GERMAN) Thanks!

    Read the article

  • help: cannot make ubuntu 64-bit v12.04 install work

    - by honestann
    I decided it was time to update my ubuntu (single boot) computer from 64-bit v10.04 to 64-bit v12.04. Unfortunately, for some reason (or reasons) I just can't make it work. Note that I am attempting a fresh install of 64-bit v12.04 onto a new 3TB hard disk, not an upgrade of the 1TB hard disk that has contained my 64-bit v10.04 installation. To perform the attempted install of v12.04 I unplug the SATA cable from the 1TB drive and plug it into the 3TB drive (to avoid risking damage to my working v10.04 installation). I downloaded the ubuntu 64-bit v12.04 install DVD ISO file (~1.6 GB) from the ubuntu releases webpage and burned it onto a DVD. I have downloaded the DVD ISO file 3 times and burned 3 of these installation DVDs (twice with v10.04 and once with my winxp64 system), but none of them work. I run the "check disk" on the DVDs at the beginning of the installation process to assure the DVD is valid. I also tried to install on two older 250GB seagate drives in the same computer. During every attempt I plug the same SATA cable (sda) into only one disk drive (the 3TB or one of the 250GB drives) and leave the other disk drives unconnected (for simplicity). Installation takes about 30 minutes on the 250GB drives, and about 60 minutes on the 3TB drive - not sure why. When I install on the 250GB drives, the install process finishes, the computer reboots (after the install DVD is removed), but I get a grub error 15. It is my understanding that 64-bit ubuntu (and 64-bit linux in general) has no problem with 3TB disk drives. In the BIOS I have tried having EFI set to "enabled" and "auto" with no apparent difference (no success). I have tried partitioning the drive in a few ways to see if that makes a difference, but so far it has not mattered. Typically I manually create partitions something like this: 8GB swap 8GB /boot ext4 3TB / ext4 But I've also tried the following, just in case it matters: 100MB boot efi 8GB swap 8GB /boot ext4 3TB / ext4 Note: In the partition dialog I specify bootup on the same drive I am partitioning and installing ubuntu v12.04 onto. It is a VERY DANGEROUS FACT that the default for this always comes up with the wrong drive (some other drive, generally the external drive). Unless I'm stupid or misunderstanding something, this is very wrong and very dangerous default behavior. Note: If I connect the SATA cable to the 1TB drive that has been my ubuntu 64-bit v10.04 system drive for the past 2 years, it boots up and runs fine. I guess there must be a log file somewhere, and maybe it gives some hints as to what the problem is. I should be able to boot off the 1TB drive with the 3TB drive connected as a secondary (non-boot) drive and get the log file, assuming there is one and someone tells me the name (and where to find it if the name is very generic). After installation on the 3TB drive completes and the system reboots, the following prints out on a black screen: Loading Operating System ... Boot from CD/DVD : Boot from CD/DVD : error: unknown filesystem grub rescue Note: I have two DVD burners in the system, hence the duplicate line above. The same install and reboot on the 250GB drives generates "grub error 15". Sigh. Any ideas? ========== motherboard == gigabyte 990FXA-UD7 CPU == AMD FX-8150 8-core bulldozer @ 3.6 GHz RAM == 8GB of DDR3 in 2 sticks (matched pair) HDD == seagate 3TB SATA3 @ 7200 rpm (new install 64-bit v12.04) HDD == seagate 1TB SATA3 @ 7200 rpm (current install 64-bit v10.04) GPU == nvidia GTX-285 ??? 
== no overclocking or other funky business USB == external seagate 2TB HDD for making backups DVD == one bluray burner (SATA) DVD == one DVD burner (SATA) The current ubuntu 64-bit v10.04 system boots and runs fine on a seagate 1TB.

    Read the article

  • Non-standard installation (installing Linux from Linux)

    - by Evan Plaice
    So, here's my setup. I have one partition with the newest version installed, a second partition with an older version installed (as a backup just in case), a swap partition that both share, and a boot partition so the bootloader doesn't need to be setup after each upgrade. Partitions: sda1 ext3 /boot sda2 ext4 / (current version) sda3 ext4 / (old version) sda4 swap /swap sda5 ntfs (contains folders symbolically linked to /home on /) So far it has been a very good setup. I can create new boot loaders without screwing it up and adding my personal files into a new install is as simple as creating some symbolic links (the partition is NTFS in case I need to load windows on the system again). Here's the issue. I'd like to be able to drop the install into /distro on the current version and install a new version on / on the old version effectively replacing/upgrading it. The goal is to be able to just swap out new versions as they are released while maintaining redundancy in case I don't like th update. So far I have: downloaded the install.iso created a folder in /distro copied the install.iso into /distro extracted vmlinuz and initrd.lz into /distro Then I modified /boot/grub/menu.lst with the following entry: title Install Linux root (hd0,1) kernel /distro/vmlinuz initrd /distro/initrd.lz vmlinuz loads perfectly but it says it can't find initrd.lz on boot. I have also tried to uncompress the image with: unlzma < initrd.lz > initrd.img And, updating the menu.lst file to match; but that doesn't work either. I'm assuming that vmlinuz (linux kernel) loads, fires up the virtual filesystem by creating a ramdisk (initrd), mounts the iso, and launches the installer. Am I missing something here? Update: First, I wanted to say that the accepted answer would have been the best option if I was doing a normal Ubuntu install. Unfortunately, I was installing Linux Mint (which lacks the script needed to make debootstrap work. So the problem I with the above approach was, I was missing the command that vmlinuz (linux kernel) needed to execute to start boot into LiveCD mode. By looking in the /boot/grub/grub.cfg file I found what I was missing. Although this method will work, it requires that the installation files reside on their own partition. I took the easy route and used unetbootin to drop the LiveCD on a usb drive and booted from that. Like I said before. Debootstrap would have been the ideal solution here. Even though I couldn't use it I wrote down the steps it would've taken to use it. Step One: Format sda3 (the partition with the old copy of linux that's being overwritten) I used gparted to format it as ext4 from within the current linux install. How this is done varies based on what tools you prefer to use. Step Two: Mount the newly formatted partition (we'll call the mount ubuntu for simplicity) sudo mkdir /mnt/ubuntu sudo mount -o -loop /dev/sda3 /mnt/ubuntu Step Three: Get debootstrap sudo apt-get install debootstrap Step Four: Mount the install disk (replace ubuntu.iso with the name if your install disk) sudo mkdir /media/cdrom sudo mount -o loop ~/ubuntu.iso /media/cdrom Step Five: Install the OS using debootstrap (replace fiesty with the version you're installing and amd64 with your processor's architecture) sudo debootstrap --arch amd64 fiesty /mnt/ubuntu file:/media/cdrom The settings here varies. 
    While I loaded debootstrap using an install ISO, you can also have debootstrap automatically download and install it with a repository link (while most of these repositories contain Debian versions, I'm still not clear as to whether Ubuntu has similar repositories). Here is a list of the Debian package repositories and their mirrors. This is how you'd deploy debootstrap if you were doing it directly from a repository:

        sudo debootstrap --arch amd64 squeeze /mnt/debian http://ftp.us.debian.org/debian

    Here's the link that I primarily used to figure this out.

    Read the article
