Search Results

Search found 9992 results on 400 pages for 'space efficiency'.


  • Investigating on xVelocity (VertiPaq) column size

    - by Marco Russo (SQLBI)
      In January I published an article about how to optimize high cardinality columns in VertiPaq. In the meantime, VertiPaq has been rebranded to xVelocity: the official name is now “xVelocity in-memory analytics engine (VertiPaq)”, but xVelocity and VertiPaq mean the same thing when we talk about Analysis Services. In this post I’ll show how to investigate the column sizes of an existing Tabular database so that you can find the most important columns to optimize.
      A first approach is to look in the DataDir of Analysis Services for the folder containing the database. Then look for the biggest files in all subfolders and you will find a file whose name contains the name of the most expensive column. However, this heuristic process is not very precise. A better approach is to use a DMV that provides the exact information. For example, the following query (open SSMS, open an MDX query window on the database you are interested in and execute it) returns all database objects sorted by used size in descending order.
          SELECT * FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS ORDER BY used_size DESC
      Look at the first rows to understand which are the most expensive columns in your tabular model. The interesting data provided are:
      TABLE_ID: the name of the object – it can also be a dictionary or an index
      COLUMN_ID: the name of the column the object belongs to – you may also see ID_TO_POS and POS_TO_ID when they refer to internal indexes
      RECORDS_COUNT: the number of rows in the column
      USED_SIZE: the memory used by the object
      By looking at the ratio between USED_SIZE and RECORDS_COUNT you can understand what you can do to optimize your tabular model. Your options are:
      Remove the column. Yes, if it contains data you will never use in a query, simply remove the column from the tabular model.
      Change granularity. If you are tracking time and you included milliseconds but seconds would be enough, round the data source column to the nearest second. If you have a floating point number but two decimals are good enough (e.g. a temperature), round the number to the nearest decimal that is relevant to you.
      Split the column. Create two or more columns that have to be combined together in order to produce the original value. This technique is described in the VertiPaq optimization article.
      Sort the table by that column. When you read the data source, you might consider sorting data by this column, so that the compression will be more efficient. However, this technique works better on columns that don’t have too many distinct values, and you will probably move the problem to another column. Sorting data starting from the lower-density columns (those with a small number of distinct values) and moving to the higher-density columns (those with high cardinality) is the technique that provides the best compression ratio.
      After the optimization you should be able to reduce the used size and improve the count/size ratio you measured before. If you are interested in a longer discussion about internal storage in VertiPaq and you want to understand why this approach can save you space (and time), you can attend my 24 Hours of PASS session “VertiPaq Under the Hood” on March 21 at 08:00 GMT.
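      If you want to automate this check from a .NET client, the same DMV can be executed through ADOMD.NET. The following C# code is a minimal sketch, assuming the Microsoft.AnalysisServices.AdomdClient assembly is referenced; the connection string, catalog name and output formatting are placeholders to adapt, not part of the original article.
          using System;
          using Microsoft.AnalysisServices.AdomdClient;

          class ColumnSizeReport
          {
              static void Main()
              {
                  // Placeholder connection string: adjust the server and database names.
                  var connectionString = "Data Source=localhost;Catalog=MyTabularDb";

                  using (var conn = new AdomdConnection(connectionString))
                  {
                      conn.Open();
                      var cmd = conn.CreateCommand();
                      cmd.CommandText =
                          "SELECT * FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS " +
                          "ORDER BY used_size DESC";

                      using (var reader = cmd.ExecuteReader())
                      {
                          while (reader.Read())
                          {
                              long usedSize = Convert.ToInt64(reader["USED_SIZE"]);
                              long records  = Convert.ToInt64(reader["RECORDS_COUNT"]);
                              double bytesPerRow = records > 0 ? (double)usedSize / records : 0;

                              Console.WriteLine("{0} / {1}: {2:N0} bytes, {3:N2} bytes/row",
                                  reader["TABLE_ID"], reader["COLUMN_ID"], usedSize, bytesPerRow);
                          }
                      }
                  }
              }
          }
      Sorting by the bytes-per-row figure rather than raw USED_SIZE highlights the columns where the optimizations above (granularity changes, splitting, sorting) are likely to pay off most.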

    Read the article

  • Viewport / Camera Calculation in 2D Game

    - by Dave
    We have a 2D game with some sprites and tiles and some kind of camera/viewport that "moves" around the scene. So far so good, if we didn't have some special behaviour for our camera/viewport translation. Normally you could stick the camera to your player figure and center it, resulting in a very cheap, undergraduate translation equation like:
        vec_translation -/+= speed (depending on which keys are pressed, WASD as default)
    But we want our player figure to be able to actually reach the bounds when the viewport/camera has reached its maximum translation. We came up with the following solution (only keys A and D are shown here, the rest is just an adaptation of the calculation, or maybe YOUR super-cool and elegant solution :) ):
        if(keys[A])
        {
            playerX -= speed;
            if(playerScreenX <= width / 2 && tx < 0)
            {
                playerScreenX = width / 2;
                tx += speed;
            }
            else if(playerScreenX <= width / 2 && (tx) >= 0)
            {
                playerScreenX -= speed;
                tx = 0;
                if(playerScreenX < 0) playerScreenX = 0;
            }
            else if(playerScreenX >= width / 2 && (tx) < 0)
            {
                playerScreenX -= speed;
            }
        }
        if(keys[D])
        {
            playerX += speed;
            if(playerScreenX >= width / 2 && (-tx + width) < sceneWidth)
            {
                playerScreenX = width / 2;
                tx -= speed;
            }
            if(playerScreenX >= width / 2 && (-tx + width) >= sceneWidth)
            {
                playerScreenX += speed;
                tx = -(sceneWidth - width);
                if(playerScreenX >= width - player.width) playerScreenX = width - player.width;
            }
            if(playerScreenX <= width / 2 && (-tx + width) < sceneWidth)
            {
                playerScreenX += speed;
            }
        }
    I think the code is rather self-explanatory: keys is a flag container for currently active keys, playerX/-Y is the position of the player relative to the world origin, tx/ty are the translation components vital to background / NPC / item offset calculation, playerScreenX/-Y is the actual position of the player figure (sprite) on screen, and width/height are the dimensions of the camera/viewport. This all looks quite nice and works well, but there is a very small and nasty calculation error which sums up to a visible effect. Consider the following piece of code:
        if(playerScreenX <= width / 2 && tx < 0)
        {
            playerScreenX = width / 2;
            tx += speed;
        }
    It can be translated into plain English as: if the x position of your player figure on screen is less than or equal to half of your display / camera / viewport width AND there is enough space left LEFT of your viewport/camera, then set the player's x position on screen to half the width and increase the translation (because we subtract the translation from something we want to move). Easy, right?! Doing this creates a small delta between playerX and playerScreenX. After so much talking, my question appears here at the bottom of this document: how do I tie the calculation of my player-on-screen position to the actual position of the player AND have a viewport that is not always centered around the player's figure? Here is a small test case in Processing: http://pastebin.com/bFaTauaa Thank you for reading this far, and thank you in advance for (probably) answering my question.
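    One common way to avoid that drift (a hedged sketch, not necessarily the approach you will end up with): treat the player's world position as the single source of truth, derive the camera translation by clamping the centered camera to the world bounds, and compute the on-screen position from those two values instead of integrating it separately. In C#-style code with the same variable names as above (the clamp bounds and sign conventions are assumptions you would adapt to your own coordinate system):
        // World-space input: move the player directly.
        if (keys[A]) playerX -= speed;
        if (keys[D]) playerX += speed;

        // Keep the player inside the scene.
        playerX = Math.Max(0f, Math.Min(playerX, sceneWidth - player.width));

        // The camera wants to center on the player, but is clamped to the scene bounds.
        float cameraX = playerX + player.width / 2f - width / 2f;
        cameraX = Math.Max(0f, Math.Min(cameraX, sceneWidth - width));

        // Derived values: translation for backgrounds/NPCs and the on-screen position.
        tx = -cameraX;
        playerScreenX = playerX - cameraX;
    Because playerScreenX is recomputed from playerX and cameraX every frame, the two can never drift apart, and the player naturally reaches the screen edges once the camera hits its clamp.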

    Read the article

  • AI to move custom-shaped spaceships (shape affecting movement behaviour)

    - by kaoD
    I'm designing a networked, turn-based, 3D-6DOF space fleet combat strategy game which relies heavily on ship customization. Let me explain the game a bit, since you need to know a bit about it to understand the question. What I aim for is the ability to create your own fleet of ships with custom shapes and attached modules (propellers, tractor beams...) which would give advantages and disadvantages to each ship, so you have lots of different fleet distributions. E.g., a long ship with two propellers at the sides would let the ship spin around that plane easily; bigger ships would move slowly unless you place lots of propellers at the back (therefore spending more "construction" points and energy when moving, and it will only move fast in that direction). I plan to balance the whole game around this feature.
    The game would revolve around two phases: an orders phase and a combat phase. During the orders phase, you command the different ships. When all players finish the orders phase, the combat phase begins and the ship orders get resolved in real time for some time, then the action pauses and there's a new orders phase.
    The problem comes when I think about player input. To move a ship, you need to turn different propellers on or off if you want to steer, travel forward, brake, rotate in place... These propellers don't have to work at full power, so you can achieve more movement combinations with fewer propellers. I think this approach is a bit boring. The player doesn't want to fiddle with motors or anything; you just want to MOVE and KILL. The way I intend the player to give orders to these ships is by a destination and a rotation, and then the AI would calculate the correct propeller power to achieve that movement and rotation. Propulsion doesn't have to be the same throughout the entire turn calculation (after the orders have been given), so it would be cool if the ships reacted as they move, adjusting the power of the propellers to their needs dynamically, but it may be too hard to implement and it's not really needed for the game to work. In both cases, how would that AI decide which propellers to activate for the best (or at least not the worst) trajectory?
    I thought about some approaches:
    Learning AI: The ship types would learn about their movement by trial and error, adjusting their behaviour with more use, and finally becoming "smart". I don't want to get involved THAT far in AI coding, and I think it can be frustrating for the player (even if you can let it learn without playing).
    Pre-calculated timestep movement: Upon ship creation, ALL possible movements are calculated for each propeller configuration and power for a given delta-time. Memory intensive, ugly, bad.
    Pre-calculated trajectories: The same as above, but not for each delta-time but for the whole trajectory, which would then be fitted as closely as possible. Requires a fixed propeller configuration for the whole combat phase and is still memory intensive, ugly and bad.
    Continuous brute forcing: The AI continuously checks ALL possible propeller configurations throughout the entire combat phase, precalculates a few time steps and decides which is the best one based on that. Con: what's good now might not be that good later, and it's too CPU intensive, ugly, and bad too.
    Single brute forcing: Same as above, but only brute forcing at the beginning of the simulation, so it needs a constant propeller configuration throughout the entire combat phase.
    Continuous angle check: This is not a full movement method, but maybe a way to discard "stupid" propeller configurations. Given a propeller's current normal vector and the desired one, you can approximate the power needed for the propeller based on the angle (a rough sketch of this idea follows below). You must do this continuously throughout the whole combat phase. I figured this one out recently, so I haven't put much thought into it. A priori, it has the "what's good now might not be that good later" drawback too, and it doesn't take into account the other propellers, which may act together to make a better propelling configuration.
    I'm really stuck here. Any ideas?
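    To make that last idea concrete, here is a minimal C# sketch of the angle/dot-product heuristic (the Vec2 type, the 2D simplification and all names are my own illustration, not something from the question): each thruster is scored by how well its thrust direction lines up with the desired acceleration, and opposed thrusters are switched off.
        using System;

        struct Vec2
        {
            public float X, Y;
            public Vec2(float x, float y) { X = x; Y = y; }

            public static float Dot(Vec2 a, Vec2 b)
            {
                return a.X * b.X + a.Y * b.Y;
            }

            public Vec2 Normalized()
            {
                float len = (float)Math.Sqrt(X * X + Y * Y);
                return len > 0f ? new Vec2(X / len, Y / len) : this;
            }
        }

        class Thruster
        {
            // Unit vector: the direction this thruster pushes the ship.
            public Vec2 ThrustDirection;
        }

        static class ThrusterHeuristic
        {
            // Returns a 0..1 power setting per thruster: thrusters aligned with the desired
            // acceleration get more power, opposed ones are switched off entirely.
            public static float[] EstimatePowers(Thruster[] thrusters, Vec2 desiredAcceleration)
            {
                Vec2 wanted = desiredAcceleration.Normalized();
                var powers = new float[thrusters.Length];

                for (int i = 0; i < thrusters.Length; i++)
                {
                    float alignment = Vec2.Dot(thrusters[i].ThrustDirection, wanted);
                    powers[i] = Math.Max(0f, alignment); // discard "stupid" (opposing) configurations
                }
                return powers;
            }
        }
    This ignores torque and thruster placement entirely; a fuller version would also score each thruster's moment arm against the desired rotation, which is where the cooperation between propellers that you mention comes back in.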

    Read the article

  • Why won't my Broadcom BCM4312 LP-PHY work with the STA driver?

    - by Jackson Taylor
    I tried the steps here for a 4312: https://help.ubuntu.com/community/WifiDocs/Driver/bcm43xx Both of these:
        sudo modprobe -r b43 ssb wl
        sudo modprobe wl
    return:
        FATAL: Module wl not found.
        FATAL: Error running install command for wl
    (the second message appears only for the second command, actually). I tried the broadcom-sta package; it didn't work. What's confusing is that further down, in the next steps for STA with internet access, it says to use the bcmwl one. So I install that and it succeeds, but with some errors:
        sudo apt-get install bcmwl-kernel-source
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following package was automatically installed and is no longer required:
          module-assistant
        Use 'apt-get autoremove' to remove it.
        The following NEW packages will be installed:
          bcmwl-kernel-source
        0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
        Need to get 0 B/1,181 kB of archives.
        After this operation, 3,609 kB of additional disk space will be used.
        Selecting previously unselected package bcmwl-kernel-source.
        (Reading database ... 168005 files and directories currently installed.)
        Unpacking bcmwl-kernel-source (from .../bcmwl-kernel-source_5.100.82.112+bdcom-0ubuntu3_amd64.deb) ...
        Setting up bcmwl-kernel-source (5.100.82.112+bdcom-0ubuntu3) ...
        Loading new bcmwl-5.100.82.112+bdcom DKMS files...
        Building only for 3.5.0-21-generic
        Building for architecture x86_64
        Module build for the currently running kernel was skipped since the kernel source for this kernel does not seem to be installed.
        ERROR: Module b43 does not exist in /proc/modules
        ERROR: Module b43legacy does not exist in /proc/modules
        ERROR: Module ssb does not exist in /proc/modules
        ERROR: Module bcm43xx does not exist in /proc/modules
        ERROR: Module brcm80211 does not exist in /proc/modules
        ERROR: Module brcmfmac does not exist in /proc/modules
        ERROR: Module brcmsmac does not exist in /proc/modules
        ERROR: Module bcma does not exist in /proc/modules
        FATAL: Module wl not found.
        FATAL: Error running install command for wl
        update-initramfs: deferring update (trigger activated)
        Processing triggers for initramfs-tools ...
        update-initramfs: Generating /boot/initrd.img-3.5.0-21-generic
        jtaylor991@jtaylor991-whiteHP:~$ sudo modprobe wl
        FATAL: Module wl not found.
        FATAL: Error running install command for wl
    Then I do the modprobe wl commands listed above and it gives the above-listed errors. It didn't work with the broadcom-sta driver either. I installed the b43 ones but nothing happened, and I don't know why, so those are still installed. firmware-b43legacy-installer, b43-fwcutter and firmware-b43-lpphy-installer (yes, it is a LP-PHY) are currently installed.
    If I go into System Settings > Software Sources > Additional Drivers it says "Using Broadcom 802.11 Linux STA wireless driver source from bcmwl-kernel-source (proprietary)". But bcmwl-kernel-source isn't installed. I could try again, but I remember rebooting and it still said this. What's funny is that it found wireless networks during the Ubuntu setup/installation; I don't remember if I got it to connect or not, though. I think it kept asking for a password when I put it in (yes, it was right; I showed the password and looked at it), so I just ignored it. But right now the Enable Wireless option in the top right is just gone; it's just Enable Networking, and I'm on ethernet on this HP Pavilion dv4-1435dx right here. If I run rfkill list it shows:
        0: hp-wifi: Wireless LAN
            Soft blocked: no
            Hard blocked: no
    It was hard blocked at the beginning, but unblocking it makes no change. Also, it's a touch-sensitive button, and it appears to be always orange no matter whether it's enabled or not, because when I touch it the hard blocked value changes between yes and no in rfkill list. I think it was blue for a minute at one point. What is going on?!?! Help me! Lol, thanks for any and all of your time, guys. Oh yeah, this is a fresh install of Ubuntu 12.10.

    Read the article

  • Can't finish upgrade from 11.10 to 12.04 on a VPS based on Parallels Virtuozzo Containers, due to libc6

    - by Carmageddon
    I was stuck with this problem near the end of an upgrade:
        WARNING: this version of the GNU libc requires kernel version 2.6.24 or later. Please upgrade your kernel before installing glibc. The installation of a 2.6 kernel could ask you to install a new libc first, this is NOT a bug, and should NOT be reported. In that case, please add lenny sources to your /etc/apt/sources.list and run: apt-get install -t lenny linux-image-2.6
    Their suggested steps don't work on a VPS, and after googling, I came across this: Why did my upgrade to 12.04 fail with "glibc not found" or "libc6" or "requires kernel 2.6.24" error? There is a comment by izx which explains my problem and proposes a workaround (it might take a while to convince the guys to upgrade the kernel...). However, when I follow his instructions, I get an error:
        # apt-get -f install
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following extra packages will be installed:
          libc-dev-bin libc6 libc6-dev libnih1
        Suggested packages:
          glibc-doc
        The following packages will be upgraded:
          libc-dev-bin libc6 libc6-dev libnih1
        4 upgraded, 0 newly installed, 0 to remove and 394 not upgraded.
        1 not fully installed or removed.
        Need to get 0 B/7737 kB of archives.
        After this operation, 233 kB disk space will be freed.
        Do you want to continue [Y/n]? y
        locale: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.15' not found (required by locale)
        locale: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.14' not found (required by locale)
        Preconfiguring packages ...
        (Reading database ... 35175 files and directories currently installed.)
        Preparing to replace libc6-dev 2.13-20ubuntu5.2 (using .../libc6-dev_2.15-0ubuntu10.3_amd64.deb) ...
        Unpacking replacement libc6-dev ...
        Preparing to replace libc-dev-bin 2.13-20ubuntu5.2 (using .../libc-dev-bin_2.15-0ubuntu10.3_amd64.deb) ...
        Unpacking replacement libc-dev-bin ...
        Preparing to replace libc6 2.13-20ubuntu5.2 (using .../libc6_2.15-0ubuntu10.3_amd64.deb) ...
        locale: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.15' not found (required by locale)
        locale: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.14' not found (required by locale)
        Checking for services that may need to be restarted...
        Checking init scripts...
        runlevel:/var/run/utmp: No such file or directory
        Checking for services that may need to be restarted...
        Checking init scripts...
        runlevel:/var/run/utmp: No such file or directory
        WARNING: init script for samba not found.
        Stopping some services possibly affected by the upgrade (will be restarted later):
          cron: stopping...done.
        WARNING: this version of the GNU libc requires kernel version 2.6.24 or later. Please upgrade your kernel before installing glibc.
        The installation of a 2.6 kernel _could_ ask you to install a new libc first, this is NOT a bug, and should *NOT* be reported. In that case, please add lenny sources to your /etc/apt/sources.list and run: apt-get install -t lenny linux-image-2.6
        Then reboot into this new kernel, and proceed with your upgrade
        dpkg: error processing /var/cache/apt/archives/libc6_2.15-0ubuntu10.3_amd64.deb (--unpack): subprocess new pre-installation script returned error exit status 1
        Processing triggers for man-db ...
        locale: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.15' not found (required by locale)
        locale: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.14' not found (required by locale)
        Errors were encountered while processing: /var/cache/apt/archives/libc6_2.15-0ubuntu10.3_amd64.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)
    I also attempted to manually grab the .deb package and install it using dpkg -i, but I get:
        locale: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.15' not found (required by locale)
    even though the file is libc-bin_2.15-0ubuntu10+openvz0_amd64.deb.

    Read the article

  • Notes - Part II - Play with JavaFX

    - by Silviu Turuga
    Open the project from the last lesson. Double-click on NotesUI.fxml; this will open the JavaFX Scene Builder.
    On the left side you have an area called Hierarchy; from there press Del (or Shift+Backspace on Mac) to delete the Button and the Label. You'll receive a warning that some components have been assigned an fx:id; click Delete, as we don't need them anymore.
    Resize the AnchorPane to have enough room for our design, e.g. 820x550 px.
    From the top left, pick the container called Accordion and drag it over the AnchorPane design. Then choose a List View from Controls and drag it inside the Accordion. You'll notice that by default the Accordion has 2 TitledPanes, and you can switch between them by clicking on their names. I'll leave you the pleasure of doing the rest in order to get the following result. Here is the list of objects used.
    Save it and then return to NetBeans. Run the application and it should run without any issue. If you click the buttons they are all functional, but nothing happens, as we didn't link them to any action. We'll see this in the next episode.
    Now, let's play a little bit with the application and try to resize it… Have you noticed the behavior? If the form is too small, some objects aren't visible; if it is too large, there is too much space. That's certainly something your users won't like, and you as a programmer have to take care of this.
    From NetBeans, double-click NotesUI.fxml to return to the JavaFX Scene Builder. Select the TextField at the bottom left of Notes, the one where I put the text Category, and then on the right side of the JavaFX Scene Builder you'll notice a panel called Inspector. Choose Layout and then click on the dotted lines at the left and bottom of the square, as you can see in the image below. This will make the text field always keep the same distance from the left and bottom edges, no matter the size of the form. Save and run the application. Note that whenever the form's height changes, the Category TextField keeps the same distance from the bottom.
    Select the Accordion and do the same steps, but also check the top dotted line, because we want the Accordion to have the same height as the main form. I'll leave you the pleasure of doing the same for the rest of the components. It's very important to design an application that can be resized by the user while, at the same time, all the controls stay in place.
    The last step is to make sure our application does not get smaller than a certain size, as this would hide parts of our layout. So select the AnchorPane, go to Layout in the Inspector, and note down the Width and Height. Go back to NetBeans, open the file Main.java and add the following code just after stage.setScene(scene); (around line 26):
        stage.setMinWidth(820);
        stage.setMinHeight(550);
    Use your own width and height. This will prevent the user from reducing the width or height of your application to a value that would hide parts of your layout.
    So now you should have done most of the design part, and next time we'll see how we can enter some data into our newly created application… Note: in case you missed something, here are the source files of the project up to this point.

    Read the article

  • How to fix: Ubuntu 12.04 reboots after loading with elilo

    - by Casey
    I have an HP p6-2120 with
    CPU: AMD A6-3620 APU with Radeon Graphics
    RAM: 6GB
    BIOS: HO2_710.ROM v7.10 [AMI v7.10 4/19/2012]
    Disk: SATA1 (/dev/sda) - 1 TB (windows)
    Disk: SATA2 (/dev/sdb) - 1 TB, partitioned using "parted -a optimal /dev/sdb" as follows:
        1049KB 201MB FAT32, boot flag set
        201MB 60GB ext2 (/)
        68GB 78GB linux-swap(v1) (swap)
        78GB 790GB ext4 (/home)
        - the rest is "free" space reserved for other purposes (eventually)
    ubuntu: 12.04.1 LTS [specifically: Release 12.04 (precise) 64-bit]
    kernel: linux 3.2.0-29-generic
    I created a bootable EFI USB from the ISO (64-bit) which I downloaded. I can run and install from the USB without any problems. The BIOS is an EFI BIOS that appears to be capable of booting in either EFI or Legacy mode.
    Initially, I did the "standard" install with NOTHING on disk 2, and let the installer configure everything. The net result of this was that when I started the computer and forced it into "boot" menu mode, it DOES NOT recognize SATA2 as an EFI drive, and when I attempt to "legacy" boot from it, I get the message "ERROR: No Boot Disk has been detected." The "standard" install created one large partition that consumed the entire disk.
    At that point, I manually partitioned the disk (using sudo parted -a optimal /dev/sdb) as described above. I selected the "other" install, and changed /dev/sdb1 to "bios_grub", /dev/sdb2 to "/" (ext4), /dev/sdb3 to swap, and /dev/sdb4 to "/home". [Note: fearing that elilo might not recognize ext4, I switched /dev/sdb2 to ext2 and re-installed.] The net result was that the install appeared to trash the /dev/sdb1 partition so that it was NOT readable by anything. I re-formatted /dev/sdb1 as FAT32 and set the boot flag. I repeated the install, ignoring the messages about no bios_grub partition.
    After several attempts to get GRUB2 to work, I switched to elilo. I downloaded the most recent version and copied it (elilo-3.14-ia64.efi) to /dev/sdb1/efi/boot/bootx64.efi. (The BIOS boot loader did not recognize it either as elilo-3.14.ia64.efi or as elilo.efi. Based on the advice in one of the web pages I found, I renamed it to bootx64.efi. This worked.) In that same directory (/efi/boot), I copied the file pointed to by the link in /dev/sdb2/vmlinuz to /efi/boot/vmlinuz, and the file pointed to by the link in /dev/sdb2/initrd.img to /efi/boot/initrd.img. I created an elilo.conf file as follows:
        timeout=5000
        prompt
        default=linux-boot
        image=vmlinuz
        label=linux-boot
        read-only
        initrd=initrd.img
        root=/dev/sdb2
    The /efi/boot directory contains 4 files: bootx64.efi, elilo.conf, vmlinuz, initrd.img.
    When I power-cycle the computer and force the boot menu, drive 2 shows up as an EFI bootable drive. When I select it, I get the elilo prompt. Pressing Enter, it appears to load the kernel (I have tried it with verbose=5, and there is a long string of messages, with the final one a command line to load the kernel and a series of several dots that fly by), then the screen goes blank, and it reboots the computer. [Note: I have also tried substituting the UUID as found in the /etc/fstab of the installed system for the root directory. This had no effect.]
    This is a brief synopsis of several nights of fiddling with this. I would deeply appreciate any help you can give.

    Read the article

  • ConcurrentDictionary<TKey,TValue> used with Lazy<T>

    - by Reed
    In a recent thread on the MSDN forum for the TPL, Stephen Toub suggested mixing ConcurrentDictionary<T,U> with Lazy<T>.  This provides a fantastic model for creating a thread-safe dictionary of values where the construction of the value type is expensive.  This is an incredibly useful pattern for many operations, such as value caches.
    The ConcurrentDictionary<TKey, TValue> class was added in .NET 4, and provides a thread-safe, lock-free collection of key value pairs.  While this is a fantastic replacement for Dictionary<TKey, TValue>, it has a potential flaw when used with values where construction of the value class is expensive.
    The typical way this is used is to call a method such as GetOrAdd to fetch or add a value to the dictionary.  It handles all of the thread safety for you, but as a result, if two threads call this simultaneously, two instances of TValue can easily be constructed. If TValue is very expensive to construct, or worse, has side effects if constructed too often, this is less than desirable.  While you can easily work around this with locking, Stephen Toub provided a very clever alternative – using Lazy<TValue> as the value in the dictionary instead. This looks like the following.  Instead of calling:
        MyValue value = dictionary.GetOrAdd(
            key,
            k => new MyValue(k));
    We would instead use a ConcurrentDictionary<TKey, Lazy<TValue>>, and write:
        MyValue value = dictionary.GetOrAdd(
            key,
            k => new Lazy<MyValue>(
                () => new MyValue(k)))
            .Value;
    This simple change dramatically changes how the operation works.  Now, if two threads call this simultaneously, instead of constructing two MyValue instances, we construct two Lazy<MyValue> instances. However, the Lazy<T> class is very cheap to construct.  Unlike “MyValue”, we can safely afford to construct this twice and “throw away” one of the instances. We then call Lazy<T>.Value at the end to fetch our “MyValue” instance.  At this point, GetOrAdd will always return the same instance of Lazy<MyValue>.  Since Lazy<T> doesn’t construct the MyValue instance until requested, the actual MyValue instance returned is only constructed once.
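    As a usage illustration (a minimal sketch of my own, not from the original post; the cache class and the LazyThreadSafetyMode choice are my assumptions), the pattern is easy to wrap in a small thread-safe cache so callers never see the Lazy<T> plumbing:
        using System;
        using System.Collections.Concurrent;
        using System.Threading;

        // Thread-safe cache that guarantees the expensive factory runs at most once per key.
        public class LazyCache<TKey, TValue>
        {
            private readonly ConcurrentDictionary<TKey, Lazy<TValue>> _items =
                new ConcurrentDictionary<TKey, Lazy<TValue>>();

            private readonly Func<TKey, TValue> _factory;

            public LazyCache(Func<TKey, TValue> factory)
            {
                _factory = factory;
            }

            public TValue Get(TKey key)
            {
                // Two racing threads may both create a Lazy<TValue> (cheap), but only the
                // one stored in the dictionary will ever run the expensive factory.
                Lazy<TValue> lazy = _items.GetOrAdd(
                    key,
                    k => new Lazy<TValue>(() => _factory(k),
                                          LazyThreadSafetyMode.ExecutionAndPublication));
                return lazy.Value;
            }
        }

        // Example: var cache = new LazyCache<string, MyValue>(k => new MyValue(k));
        //          MyValue v = cache.Get("some-key");
    ExecutionAndPublication is the default thread-safety mode for Lazy<T>, spelled out here only to make the "construct at most once" guarantee explicit.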

    Read the article

  • C# 5 Async, Part 3: Preparing Existing code For Await

    - by Reed
    While the Visual Studio Async CTP provides a fantastic model for asynchronous programming, it requires code to be implemented in terms of Task and Task<T>.  The CTP adds support for Task-based asynchrony to the .NET Framework methods, and promises to have these implemented directly in the framework in the future.  However, existing code outside the framework will need to be converted to using the Task class prior to being usable via the CTP.
    Wrapping existing asynchronous code into a Task or Task<T> is, thankfully, fairly straightforward.  There are two main approaches to this.
    Code written using the Asynchronous Programming Model (APM) is very easy to convert to using Task<T>.  The TaskFactory class provides the tools to directly convert APM code into a method returning a Task<T>.  This is done via the FromAsync method.  This method takes the BeginOperation and EndOperation methods, as well as any parameters and state objects as arguments, and returns a Task<T> directly.
    For example, we could easily convert the WebRequest BeginGetResponse and EndGetResponse methods into a method which returns a Task<WebResponse> via:
        Task<WebResponse> task = Task.Factory
            .FromAsync<WebResponse>(
                request.BeginGetResponse,
                request.EndGetResponse,
                null);
    Event-based Asynchronous Pattern (EAP) code can also be wrapped into a Task<T>, though this requires a bit more effort than the one line of code above.  This is handled via the TaskCompletionSource<T> class.  MSDN provides a detailed example of using this to wrap an EAP operation into a method returning Task<T>.  It demonstrates handling cancellation and exception handling as well as the basic operation of the asynchronous method itself.  The basic form of this operation is typically:
        Task<YourResult> GetResultAsync()
        {
            var tcs = new TaskCompletionSource<YourResult>();

            // Handle the event, and setup the task results...
            this.GetResultCompleted += (o, e) =>
            {
                if (e.Error != null)
                    tcs.TrySetException(e.Error);
                else if (e.Cancelled)
                    tcs.TrySetCanceled();
                else
                    tcs.TrySetResult(e.Result);
            };

            // Call the asynchronous method
            this.GetResult();

            // Return the task from the TaskCompletionSource
            return tcs.Task;
        }
    We can easily use these methods to wrap our own code into a method that returns a Task<T>.  Existing libraries which cannot be edited can be extended via extension methods.  The CTP uses this technique to add appropriate methods throughout the framework.
    The suggested naming for these methods is to define these methods as “Task<YourResult> YourClass.YourOperationAsync(…)”.  However, this naming often conflicts with the default naming of the EAP.  If this is the case, the CTP has standardized on using “Task<YourResult> YourClass.YourOperationTaskAsync(…)”.
    Once we’ve wrapped all of our existing code into operations that return Task<T>, we can begin investigating how the Async CTP can be used with our own code.
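    As a concrete instance of the TaskCompletionSource pattern above (a sketch of my own, not from the post; it assumes the classic System.Net.WebClient EAP members DownloadStringAsync and DownloadStringCompleted), wrapping WebClient into a Task<string> might look like this:
        using System;
        using System.Net;
        using System.Threading.Tasks;

        public static class WebClientExtensions
        {
            // Wraps the EAP pair DownloadStringAsync / DownloadStringCompleted in a Task<string>.
            public static Task<string> DownloadStringTaskAsync(this WebClient client, Uri address)
            {
                var tcs = new TaskCompletionSource<string>();

                DownloadStringCompletedEventHandler handler = null;
                handler = (sender, e) =>
                {
                    client.DownloadStringCompleted -= handler; // unsubscribe so the client can be reused

                    if (e.Error != null)
                        tcs.TrySetException(e.Error);
                    else if (e.Cancelled)
                        tcs.TrySetCanceled();
                    else
                        tcs.TrySetResult(e.Result);
                };

                client.DownloadStringCompleted += handler;
                client.DownloadStringAsync(address);

                return tcs.Task;
            }
        }
    The name follows the convention the post mentions: since DownloadStringAsync is already taken by the EAP member, the Task-returning wrapper gets the TaskAsync suffix.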

    Read the article

  • Omni-directional light shadow mapping with cubemaps in WebGL

    - by Winged
    First of all, I must say that I have read a lot of posts describing the usage of cubemaps, but I'm still confused about how to use them. My goal is to achieve simple omni-directional (point) light shadowing in my WebGL application. I know that there are many more techniques (like using Two-Hemispheres or Camera Space Shadow Mapping) which are way more efficient, but for educational purposes cubemaps are my primary goal. Till now, I have adapted a simple shadow mapping which works with spotlights (with one exception: I don't know how to cut off the glitchy part beyond the reach of a single shadow map texture; see the "glitchy shadow mapping" screenshot I linked).
    So for now, this is how I understand the usage of cubemaps in shadow mapping:
    Set up a framebuffer (in the case of cubemaps, 6 framebuffers; 6 instead of 1 because every usage of framebufferTexture2D slows down execution, as is nicely described in the article I linked) and a cubemap texture. Also, in WebGL depth components are not well supported, so I need to render to RGBA first.
        this.texture = gl.createTexture();
        gl.bindTexture(gl.TEXTURE_CUBE_MAP, this.texture);
        gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
        gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
        for (var face = 0; face < 6; face++)
            gl.texImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, gl.RGBA, this.size, this.size, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
        gl.bindTexture(gl.TEXTURE_CUBE_MAP, null);
        this.framebuffer = [];
        for (face = 0; face < 6; face++) {
            this.framebuffer[face] = gl.createFramebuffer();
            gl.bindFramebuffer(gl.FRAMEBUFFER, this.framebuffer[face]);
            gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_CUBE_MAP_POSITIVE_X + face, this.texture, 0);
            gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.RENDERBUFFER, this.depthbuffer);
            var e = gl.checkFramebufferStatus(gl.FRAMEBUFFER); // Check for errors
            if (e !== gl.FRAMEBUFFER_COMPLETE)
                throw "Cubemap framebuffer object is incomplete: " + e.toString();
        }
    Set up the light and the camera (I'm not sure if I should store all 6 view matrices and send them to the shaders later, or whether there is a way to do it with just one view matrix).
    Render the scene 6 times from the light's position, each time in another direction (X, -X, Y, -Y, Z, -Z):
        for (var face = 0; face < 6; face++) {
            gl.bindFramebuffer(gl.FRAMEBUFFER, shadow.buffer.framebuffer[face]);
            gl.viewport(0, 0, shadow.buffer.size, shadow.buffer.size);
            gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
            camera.lookAt( light.position.add( cubeMapDirections[face] ) );
            scene.draw(shadow.program);
        }
    In a second pass, calculate the projection of the current vertex using the light's projection and view matrices. Now I don't know if I should calculate 6 of them, because of the 6 faces of the cubemap. ScaleMatrix pushes the projected vertex into the 0.0 - 1.0 region.
        vDepthPosition = ScaleMatrix * uPMatrixFromLight * uVMatrixFromLight * vWorldVertex;
    In the fragment shader, calculate the distance between the current vertex and the light position and check whether it's deeper than the depth information read from the earlier-rendered shadow map. I know how to do it with a 2D texture, but I have no idea how I should use the cubemap texture here. I have read that texture lookups into cubemaps are performed with a normal (direction) vector instead of a UV coordinate. What vector should I use? Just a normalized vector pointing to the current vertex?
    For now, my code for this part looks like this (not working yet):
        float shadow = 1.0;
        vec3 depth = vDepthPosition.xyz / vDepthPosition.w;
        depth.z = length(vWorldVertex.xyz - uLightPosition) * linearDepthConstant;
        float shadowDepth = unpack(textureCube(uDepthMapSampler, vWorldVertex.xyz));
        if (depth.z > shadowDepth)
            shadow = 0.5;
    Could you give me some hints or examples (preferably in WebGL code) of how I should build it?

    Read the article

  • Multithreading 2D gravity calculations

    - by Postman
    I'm building a space exploration game and I've currently started working on gravity (in C# with XNA). The gravity still needs tweaking, but before I can do that, I need to address some performance issues with my physics calculations. This is using 100 objects; normally, rendering 1000 of them with no physics calculations gets well over 300 FPS (which is my FPS cap), but any more than 10 or so objects brings the game (and the single thread it runs on) to its knees when doing physics calculations. I checked my thread usage and the first thread was killing itself from all the work, so I figured I just needed to do the physics calculation on another thread. However, when I try to run the Gravity.cs class's Update method on another thread, the game is still down to 2 FPS, even if Gravity's Update method has nothing in it.
    Gravity.cs:
        public void Update()
        {
            foreach (KeyValuePair<string, Entity> e in entityEngine.Entities)
            {
                Vector2 Force = new Vector2();
                foreach (KeyValuePair<string, Entity> e2 in entityEngine.Entities)
                {
                    if (e2.Key != e.Key)
                    {
                        float distance = Vector2.Distance(entityEngine.Entities[e.Key].Position, entityEngine.Entities[e2.Key].Position);
                        if (distance > (entityEngine.Entities[e.Key].Texture.Width / 2 + entityEngine.Entities[e2.Key].Texture.Width / 2))
                        {
                            double angle = Math.Atan2(entityEngine.Entities[e2.Key].Position.Y - entityEngine.Entities[e.Key].Position.Y, entityEngine.Entities[e2.Key].Position.X - entityEngine.Entities[e.Key].Position.X);
                            float mult = 0.1f * (entityEngine.Entities[e.Key].Mass * entityEngine.Entities[e2.Key].Mass) / distance * distance;
                            Vector2 VecForce = new Vector2((float)Math.Cos(angle), (float)Math.Sin(angle));
                            VecForce.Normalize();
                            Force = Vector2.Add(Force, VecForce * mult);
                        }
                    }
                }
                entityEngine.Entities[e.Key].Position += Force;
            }
        }
    Yeah, I know. It's a nested foreach loop, but I don't know how else to do the gravity calculation, and this seems to work; it's just so intensive that it needs its own thread. (Even if someone knows a super-efficient way to do these calculations, I'd still like to know how I COULD do it on multiple threads instead.)
    EntityEngine.cs (manages an instance of Gravity.cs):
        public class EntityEngine
        {
            public Dictionary<string, Entity> Entities = new Dictionary<string, Entity>();
            public Gravity gravity;
            private Thread T;

            public EntityEngine()
            {
                gravity = new Gravity(this);
            }

            public void Update()
            {
                foreach (KeyValuePair<string, Entity> e in Entities)
                {
                    Entities[e.Key].Update();
                }
                T = new Thread(new ThreadStart(gravity.Update));
                T.IsBackground = true;
                T.Start();
            }
        }
    EntityEngine is created in Game1.cs, and its Update() method is called within Game1.cs. I need my physics calculation in Gravity.cs to run every time the game updates, in a separate thread, so that the calculation doesn't slow the game down to horribly low (0-2) FPS. How would I go about making this threading work? (Any suggestions for an improved planetary gravity system are welcome if anyone has them.) I'm also not looking for a lesson in why I shouldn't use threading or the dangers of using it incorrectly; I'm looking for a straight answer on how to do it. I've already spent an hour googling this very question with few results that I understood or found helpful.
    I don't mean to come off rude, but it always seems hard as a programming noob to get a straight, meaningful answer; I usually get either an answer so complex that I'd easily be able to solve my issue if I understood it, or someone saying why I shouldn't do what I want to do and offering no alternatives (that are helpful). Thank you for the help!
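    One way to get the work off the render thread without spawning a new Thread every frame (a hedged sketch under my own simplified types, not a drop-in replacement for the XNA code above): snapshot the bodies into an array, let Parallel.For compute a force per body into a separate array, and apply the results afterwards. Reading positions from the snapshot and writing only to your own index keeps the loop free of locks.
        using System;
        using System.Threading.Tasks;

        class Body
        {
            public float X, Y;     // position
            public float Mass;
            public float Radius;
        }

        static class GravityStep
        {
            // Computes and applies one gravity step for all bodies.
            public static void Update(Body[] bodies, float g = 0.1f)
            {
                int n = bodies.Length;
                var forceX = new float[n];
                var forceY = new float[n];

                // Each iteration writes only to its own index, so no locking is needed.
                Parallel.For(0, n, i =>
                {
                    for (int j = 0; j < n; j++)
                    {
                        if (i == j) continue;

                        float dx = bodies[j].X - bodies[i].X;
                        float dy = bodies[j].Y - bodies[i].Y;
                        float distSq = dx * dx + dy * dy;
                        float dist = (float)Math.Sqrt(distSq);

                        // Skip overlapping bodies, mirroring the texture-width check in the question.
                        if (dist <= bodies[i].Radius + bodies[j].Radius) continue;

                        float magnitude = g * bodies[i].Mass * bodies[j].Mass / distSq;
                        forceX[i] += dx / dist * magnitude;
                        forceY[i] += dy / dist * magnitude;
                    }
                });

                // Apply results on the calling thread (e.g. the game's Update).
                for (int i = 0; i < n; i++)
                {
                    bodies[i].X += forceX[i];
                    bodies[i].Y += forceY[i];
                }
            }
        }
    If you still want the whole calculation fully off the main thread, run it inside a single long-lived task or thread that double-buffers positions, rather than constructing a new Thread in every EntityEngine.Update call; creating and starting a thread per frame is itself expensive enough to explain much of the slowdown you saw.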

    Read the article

  • Free hosting solution for a very low-traffic website [duplicate]

    - by user966939
    I run a very low-traffic website (about 40 users, basically all of whom are active on the site daily). I don't see that changing anytime soon either, as there is no way to sign up on the site right now. Until now I have just been using a sub-directory on a friend's shared host to host the web site. But in only a few weeks from now, his subscription will end, and he has no plans to renew it. So of course this means I'll have to move on to something else. But I don't think I'll find someone who'd be willing to share a... shared host with me again. And besides, the software used on that server is ancient (PHP 4.4.9 + MySQL 4.1.22).
    There's one obvious solution that comes to mind, I guess: choose a better host and pay for it myself. The problem here is that I have no real fixed income, as I'm only a student. So even if the pricing is dirt cheap, I just can't be certain I will be able to afford it, every single month, for... at least 2 years maybe?
    So I've looked at free hosting solutions instead. My minimum requirement was that the host be completely free of ads. But no matter where I look, I always find something in a corner or two ("what can you expect from a free host?" - yeah I know, but I guess it was worth a shot). For example, on Byethost (one of the free hosts I tried), if you trigger a PHP error while error reporting is set to E_ALL, you will spawn some hidden ad... Besides Byethost, I've tried 000Webhost, x10Hosting, 2Freehosting/1Freehosting, Wink.ws, and they are only worse.
    Okay, I'm running low on ideas. But! What if I just hosted the site myself, on my own computer? That could work. I actually do have my computer on practically 24/7. But not really. Sometimes I need to reboot it, and sometimes we even have power outages. And what if the hardware needs an upgrade? It's not such a big deal for me if the site goes down, because I know what's going on; but what about the users? If I do decide to host it myself, is there some way to show users an alternate page instead of them just seeing a generic "server not found" page in the browser when the site is not accessible?
    Or is there something I have been missing out on? Is there a different kind of "web hosting" solution out there that I haven't heard of?
    Here is what I'm really looking for:
    Free (as in, no costs)
    NO ads
    Bandwidth enough for a low-traffic forum with roughly 40 users
    (Semi-)up-to-date PHP and MySQL (at least not older than a year)
    No standard (non-extension) PHP functions turned off - such as sleep()
    The mbstring extension is enabled
    Disk space: at least 5 MB
    At least one MySQL database
    Some bonus points would be:
    Max execution time of PHP scripts can be set
    Remote access to MySQL database
    What would be the best solution for me? Is there one?

    Read the article

  • A .NET Developer's day with the iPad.

    - by mbcrump
    The Apple iPad is currently getting a lot of buzz because of the app store, the book store and of course iTunes. I had the chance to play with one, and this is what I have learned about the device. Let's get this out of the way first: the iPad is awesome. It is the device for media consumption and casual web browsing. But how does it measure up for those of us with .NET on our brains all day? Let's find out…
    Main Screen – you can customize everything on this page. I guess I should replace that image with a C# or VS logo. It's pretty standard stuff if you have an iPhone.
    Programming Books – If you have a subscription to Safari Books Online, then you are in luck: it's very easy to read the books on the iPad. Just fire up the Safari web browser and go to Safari Books Online. The biggest benefit that I can see with the iPad is the ability to read books wherever I am and not have to worry about purchasing books that I already have the PDF for. Below is a sample from Code Complete, 2nd Edition. Below is a PDF of the ECMA-334 C# Language Specification. As you can see, it's very readable and you should have no problem reading actual code. It is, however, easier to read PDFs and store them with a 3rd-party PDF reader. I have seen several for 99 cents or less. You can, however, switch the screen to vertical to get more viewing space, as shown below. I was disappointed with the iBooks application. I could not find a single .NET programming book anywhere. I was able to download the excellent sci-fi book "A Memory of Wind" for free, though. If I just overlooked them, then please email me with the names and titles. I couldn't even find a technology category in the categories list.
    Web Surfing – Technical Sites: Below is an example of my site in Safari. The code is very readable and the experience was identical to viewing it in Firefox. I tried multiple programming sites and the pages looked great, except those that used Flash, which of course did not display.
    News Apps – Technical Content: The standard NY Times and USA Today apps looked great, but the technical content was lacking. It would probably be better to use Google Reader for online technical news.
    YouTube Videos – Technical Content: Since it's YouTube, we already know that a lot of technical content exists, and it plays great on the iPad. I watched several programming videos and could clearly see the code being written.
    Taking Technical Notes – The iPad comes with a great notepad for taking notes. I found that it was easy to take notes on the projects that I am currently working on.
    Calendar – The calendar that ships with the iPad is great for organizing. You can set up an Exchange server account or manually enter the information. Pretty standard stuff.
    Random applications that I like: TweetDeck and Adobe Ideas. Adobe Ideas is kind of like SketchFlow, except you use your finger to mock up the sketches. Don't forget that the iPad is great for any type of podcasting.
    That pretty much sums it up; I would definitely recommend this device, as it will only get better. I believe iOS 4 comes out on the 24th, and the iPad will only get more and more apps. You could save a few bucks by waiting for the 2nd generation, but that's a call that only you can make.

    Read the article

  • How can I have multiple layers in my map array?

    - by Manl400
    How do I load levels in my game, as in: Layer 1 would be objects, Layer 2 would be characters, and so on? I only need 3 layers, and they will all be drawn on top of each other, e.g. a flower with a transparent background placed over grass or dirt on the layer below. I would like to read them all from the same file too. How would I go about doing this? Any help would be appreciated.
    I load the map from a level file, which is just numbers corresponding to tiles in the tile sheet. Here is the level file:
        [Layer1]
        0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
        [Layer2]
        0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
        [Layer3]
        0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    And here is the code that interprets it:
        void LoadMap(const char *filename, std::vector< std::vector <int> > &map)
        {
            std::ifstream openfile(filename);
            if(openfile.is_open())
            {
                std::string line, value;
                int space;
                while(!openfile.eof())
                {
                    std::getline(openfile, line);
                    if(line.find("[TileSet]") != std::string::npos)
                    {
                        state = TileSet;
                        continue;
                    }
                    else if (line.find("[Layer1]") != std::string::npos)
                    {
                        state = Map;
                        continue;
                    }
                    switch(state)
                    {
                        case TileSet:
                            if(line.length() > 0)
                                tileSet = al_load_bitmap(line.c_str());
                            break;
                        case Map:
                            std::stringstream str(line);
                            std::vector<int> tempVector;
                            while(!str.eof())
                            {
                                std::getline(str, value, ' ');
                                if(value.length() > 0)
                                    tempVector.push_back(atoi(value.c_str()));
                            }
                            map.push_back(tempVector);
                            break;
                    }
                }
            }
            else
            {
            }
        }
    And this is how it draws the map. Also, the tile sheet is 1280 by 1280 and TileSizeX and TileSizeY are 64:
        void DrawMap(std::vector <std::vector <int> > map)
        {
            int mapRowCount = map.size();
            for(int i = 0; i < mapRowCount; i++)
            {
                int mapColCount = map[i].size();
                for (int j = 0; j < mapColCount; ++j)
                {
                    int tilesetIndex = map[i][j];
                    int tilesetRow = floor(tilesetIndex / TILESET_COLCOUNT);
                    int tilesetCol = tilesetIndex % TILESET_COLCOUNT;
                    al_draw_bitmap_region(tileSet, tilesetCol * TileSizeX, tilesetRow * TileSizeY, TileSizeX, TileSizeY, j * TileSizeX, i * TileSizeX, NULL);
                }
            }
        }
    EDIT: http://i.imgur.com/Ygu0zRE.jpg

    Read the article

  • SharePoint Content and Site Editing Tips

    - by Bil Simser
    A few content management and site editing tips for power users, on this bacon-flavoured unicorn morning. The theme here is: keep it clean!
    Write "friendly" email addresses
    Remember it's human beings reading your content. So seeing something like "If you have questions please send an email to [email protected]" breaks up the readability. Instead just do the simple steps of writing the content in plain English, then going back, highlighting the name and inserting a link (note: you might have to prefix the link with mailto:[email protected]). It makes for a friendlier looking page and hides the ugliness that is sometimes in email addresses.
    Use friendly column and list names
    This is a big pet peeve of mine. When you first create a column or list with spaces, the internal name is changed. The display name might be "My Amazing List of Animals with Large Testicles" but the internal (and link) name becomes "My_x00x20_Amazing_x00x20_List_x00x20_of_x00x20_Animals_x00x20_with_x00x20_Large_x00x20_Testicles". What's worse is if you create a publishing page named "This Website is Fueled By a Dolphin's Spleen". Not only is it incorrect grammar, but the apostrophe wreaks havoc on both the internal name for the list (with lots of crazy hex codes) as well as the hyperlink (where everything is uuencoded). Instead, create the list with a distinct and compact name, then go back and change it to whatever you want. The end result is a better formed name that you can both script and access in code more easily.
    Keep your Views Clean
    When you add a column to a list or create a new list, the default is to add it to the default view. Do everyone a favour and don't check this box! The default view of a list should be something similar to the Title field and nothing else. Keep it clean. If you want to set a default view that's different, go back and create one with all the fields and filtering and sorting columns you want and set it as default. It's a good idea to keep the original AllItems.aspx (note the lack of a space in the filename!) easy and unfiltered. It's also a good idea to keep your column count down in views. Don't let every column be added by default and don't add every column just because you can. Create separate views for distinct responsibilities and try to keep the number of columns down to a single screen to prevent horizontal scrolling.
    Simple Navigation
    The Quick Launch is a great tool for navigating around your site, but don't use the default of adding all lists to it. Uncheck that box and keep navigation simple. Create custom groupings that make sense, so if "Documents and Lists" doesn't fit your site but "Reports and Notices" makes more sense, then do it. Also hide internal lists from the Quick Launch. For example, if most users don't need to see all the lookup tables you might have on a site, don't show them. You can use audience filtering on the Quick Launch if you want to hide admin items from non-admin users, so consider that as an option.
    Enjoy!

    Read the article

  • PHP-FPM stops responding and dies [migrated]

    - by user12361
    I'm running Drupal 6 with Nginx 1.5.1 and PHP-FPM (PHP 5.3.26) on a 1GB single-core VPS with 3GB of swap space on SSD storage. I just switched from shared hosting to this unmanaged VPS because my site was getting too heavy, so I'm still learning the ropes. I have moderately high traffic; I don't really monitor it closely, but Google AdSense usually records close to 30K page views/day. I usually have 50 to 80 authenticated users logged in and a few hundred more anonymous users hitting the Boost static HTML cache at any given moment.
    The problem I'm having is that PHP-FPM frequently stops responding, resulting in Nginx 502 or 504 errors. I swear I have read every page on the internet about this issue, which seems fairly common, and I've tried endless combinations of configurations, and I can't find a good solution. After restarting Nginx and PHP-FPM, the site runs really fast for a while, and then without warning it simply stops responding. I get a white screen while the browser waits on the server, and after about 30 seconds to a minute it throws an Nginx 502 or 504 error. Sometimes it runs well for 2 minutes, sometimes 5 minutes, sometimes 5 hours, but it always ends up hanging. When I find the server in this state, there is still plenty of free memory (500MB or more) and no major CPU usage, the control and worker PHP-FPM processes are still present, and the server is still pingable and usable via SSH. A reload of PHP-FPM via the init script revives it again. The hangups don't seem to correspond to the amount of traffic, because I observed this behavior consistently when I was testing this configuration on a development VPS with no traffic at all.
    I've been constantly tweaking the settings, but I can't definitively eliminate the problem. I set Nginx workers to just 1. In the PHP-FPM config I have tried all three of the process managers. "Dynamic" is definitely the least reliable, consistently hanging up after only a few minutes. "Static" has also been unreliable and unpredictable. The least buggy has been "ondemand", but even that is failing me, sometimes after as much as 12 to 24 hours. But I can't leave the server unattended, because PHP-FPM dies and never comes back on its own. I tried adjusting the pm.max_children value from as low as 3 to as high as 50; it doesn't make a lot of difference, but I currently have it at 10. Same thing for the spare servers values. I have also set pm.max_requests anywhere from 30 to unlimited, and it doesn't seem to make a difference.
    According to the logs, the PHP-FPM processes are not exiting with SIGSEGV or SIGBUS, but rather with SIGTERM. I get a lot of lines like:
        WARNING: [pool www] child 3739, script '/var/www/drupal6/index.php' (request: "GET /index.php") execution timed out (38.739494 sec), terminating
    and:
        WARNING: [pool www] child 3738 exited on signal 15 (SIGTERM) after 50.004380 seconds from start
    I actually found several articles that recommend doing a graceful reload of PHP-FPM via cron every few minutes or hours to circumvent this issue. So that's what I did: "/etc/init.d/php-fpm reload" every 5 minutes. So far, it's keeping the lights on. But it feels like a dreadful hack. Is PHP-FPM really that unreliable? Is there anything else I can do? Thanks a lot!

    Read the article

  • Consolidation in a Database Cloud

    - by B R Clouse
    Consolidation of multiple databases onto a shared infrastructure is the next step after Standardization. The potential consolidation density is a function of the extent to which the infrastructure is shared. The three models provide increasing degrees of sharing:

    - Server: each database is deployed in a dedicated VM. Hardware is shared, but most of the software infrastructure is not. Standardization is often applied incompletely since operating environments can be moved as-is onto the shared platform. The potential for VM sprawl is an additional downside.
    - Database: multiple database instances are deployed on a shared software / hardware infrastructure. This model is very efficient and easily implemented with the features in the Oracle Database and supporting products. Many customers have moved to this model and achieved significant, measurable benefits.
    - Schema: multiple schemas are deployed within a single database instance. The most efficient model, it places constraints on the environment. Usually this model will be implemented only by customers deploying their own applications. (Note that a single deployment can combine Database and Schema consolidations.)

    Customer value: lower costs, better system utilization. In this phase of the maturity model, under-utilized hardware can be used to host more workloads, or retired and those workloads migrated to consolidation platforms. Customers benefit from higher utilization of the hardware resources, resulting in reduced data center floor space, and lower power and cooling costs. And the OpEx savings from Standardization are multiplied, since there are fewer physical components (both hardware and software) to manage.

    Customer value: higher productivity. The OpEx benefits from Standardization are compounded since not only are there fewer types of things to manage, now there are fewer entities to manage. In this phase, customers discover that their IT staff has time to move away from "day-to-day" tasks and start investing in higher value activities. Database users benefit from consolidating onto shared infrastructures by relieving themselves of the requirement to maintain their own dedicated servers. Also, if the shared infrastructure offers capabilities such as High Availability / Disaster Recovery, which are often beyond the budget and skillset of a standalone database environment, then moving to the consolidation platform can provide access to those capabilities, resulting in less downtime.

    Capabilities / Characteristics. In this phase, customers will typically deploy fixed-size clusters and consolidate on a cluster until that cluster is deemed "full," at which point a new cluster is built. Customers will define one or a few cluster architectures that are used wherever possible; occasionally there may be deployments which must be handled as exceptions. The "full" policy may be based on the number of databases deployed on the cluster, or observed peak workload, etc. IT will own the provisioning of new databases on a cluster, making the decision of when and where to place new workloads. Resources may be managed dynamically, e.g., as a priority workload increases, it may be given more CPU and memory to handle the spike. Users will be charged at a fixed, relatively coarse level; or in some cases, no charging will be applied.

    Activities / Tasks. Oracle offers several tools to plan a successful consolidation. Real Application Testing (RAT) has a feature to help plan and validate database consolidations. Enterprise Manager 12c's Cloud Management Pack for Database includes a planning module. Looking ahead, customers should start planning for the Services phase by defining the Service Catalog that will be made available for database services.

    Read the article

  • GameStateManagement and inputs not being recognized

    - by Dave Voyles
    EDIT: I've removed a bit of code from the input class to make this more readable, and updated my StartScreen class, which is now at the bottom. I have the same issues, though; they are explained in my comments at the bottom of this page. It won't let me paste my additional code here (the formatting comes out crazy), so I've linked to it on pastebin.

    I've been trying to implement the Microsoft-provided GameStateManagement sample with my game, but it has proven a bit difficult. Really, I'm using Oneksoft's Starter Kit, which uses the MS-provided sample, so they are identical except for my splash screen. I'm able to get the splash screen to launch, where it informs the player to press A to advance the screen, but it doesn't seem to accept any of my inputs. I've also added Console.WriteLine("Pressing A") under the IsMenuSelect method in Input.cs to verify that it is getting called, but for some reason it is constantly spamming my log rather than appearing only each time I press A. Not sure why this is happening.

    I have a bit too much code to post it all here, so I've attached a link to my .rar with my classes, but I'll also leave a bit here which I think may be applicable. https://www.dropbox.com/sh/6ek4uru2jc2ch0k/JTeBWN_3PQ

    What do you guys think the issue is?

        namespace Pong
        {
            public class Input
            {
                public const int MaxInputs = 4;

                public readonly KeyboardState[] CurrentKeyboardState;
                public readonly GamePadState[] CurrentGamePadState;

                public KeyboardState[] LastKeyboardState;
                public GamePadState[] LastGamePadState;

                public readonly bool[] GamePadWasConnected;

                public Input()
                {
                    // Get input state
                    CurrentKeyboardState = new KeyboardState[MaxInputs];
                    CurrentGamePadState = new GamePadState[MaxInputs];

                    // Preserving last states to check for isKeyUp events
                    LastKeyboardState = CurrentKeyboardState;
                    LastGamePadState = CurrentGamePadState;
                }

                /// <summary>
                /// Checks for a "menu select" input action.
                /// The controllingPlayer parameter specifies which player to read input for.
                /// If this is null, it will accept input from any player. When the action
                /// is detected, the output playerIndex reports which player pressed it.
                /// </summary>
                public bool IsMenuSelect(PlayerIndex? controllingPlayer, out PlayerIndex playerIndex)
                {
                    Console.WriteLine("Pressing A");

                    return IsNewKeyPress(Keys.Space, controllingPlayer, out playerIndex) ||
                           IsNewKeyPress(Keys.Enter, controllingPlayer, out playerIndex) ||
                           IsNewButtonPress(Buttons.A, controllingPlayer, out playerIndex) ||
                           IsNewButtonPress(Buttons.Start, controllingPlayer, out playerIndex);
                }

                /// <summary>
                /// Checks for a "menu cancel" input action.
                /// The controllingPlayer parameter specifies which player to read input for.
                /// If this is null, it will accept input from any player. When the action
                /// is detected, the output playerIndex reports which player pressed it.
                /// </summary>
                public bool IsMenuCancel(PlayerIndex? controllingPlayer, out PlayerIndex playerIndex)
                {
                    return IsNewKeyPress(Keys.Escape, controllingPlayer, out playerIndex) ||
                           IsNewButtonPress(Buttons.B, controllingPlayer, out playerIndex) ||
                           IsNewButtonPress(Buttons.Back, controllingPlayer, out playerIndex);
                }
            }
        }
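    For context, here is roughly what the edge-detection half of the sample's input class looks like — the part trimmed from the paste above. This is a sketch based on the stock GameStateManagement layout (member names may differ slightly from the pastebin version) and is meant to live inside the same Input class:

        public void Update()
        {
            // Call this once per frame, before any screen reads input.
            // Snapshot last frame's state as a *copy*, not an alias of the Current arrays.
            LastKeyboardState = (KeyboardState[])CurrentKeyboardState.Clone();
            LastGamePadState = (GamePadState[])CurrentGamePadState.Clone();

            for (int i = 0; i < MaxInputs; i++)
            {
                CurrentKeyboardState[i] = Keyboard.GetState((PlayerIndex)i);
                CurrentGamePadState[i] = GamePad.GetState((PlayerIndex)i);
            }
        }

        public bool IsNewKeyPress(Keys key, PlayerIndex? controllingPlayer, out PlayerIndex playerIndex)
        {
            if (controllingPlayer.HasValue)
            {
                playerIndex = controllingPlayer.Value;
                int i = (int)playerIndex;

                // Down this frame, up last frame => a fresh press rather than a held key.
                return CurrentKeyboardState[i].IsKeyDown(key) &&
                       LastKeyboardState[i].IsKeyUp(key);
            }

            // No controlling player specified: accept the press from any player.
            return IsNewKeyPress(key, PlayerIndex.One, out playerIndex) ||
                   IsNewKeyPress(key, PlayerIndex.Two, out playerIndex) ||
                   IsNewKeyPress(key, PlayerIndex.Three, out playerIndex) ||
                   IsNewKeyPress(key, PlayerIndex.Four, out playerIndex);
        }

    (IsNewButtonPress has the same shape, reading CurrentGamePadState/LastGamePadState with IsButtonDown/IsButtonUp.) Two things worth checking against the pastebin version: that Update() really runs every frame before HandleInput, and that the Last* arrays are copies rather than aliases — the constructor in the paste assigns LastKeyboardState = CurrentKeyboardState, which makes the two arrays the same object, so IsNewKeyPress can never see a key go from up to down. The constant "Pressing A" spam, on the other hand, is expected: IsMenuSelect is polled every update and the WriteLine sits before the key checks, so it prints whether or not A is held.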

    Read the article

  • Ubuntu 12.04 HP G72 Problem Installing proprietary wireless driver

    - by user69402
    I have a fresh Ubuntu 12.04 installed on an HP G72 machine. In order for my wireless to work I need the proprietary driver installed - the Broadcom STA wireless driver. Trying to install it from System Settings gives me the error: "Sorry, installation of this driver failed. Please have a look at the log file for details: /var/log/jockey.log". So far I suspect the error is caused by a bad "bcmwl-kernel-source" installation.

    What I tried: 1. remove "bcmwl-kernel-source"; 2. install "bcmwl-kernel-source" through the terminal - the installation ends with "error code (1)". I would greatly appreciate any help.

    Here is everything that the terminal returns:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following NEW packages will be installed: bcmwl-kernel-source
        0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
        Need to get 0 B/1,151 kB of archives.
        After this operation, 3,514 kB of additional disk space will be used.
        Selecting previously unselected package bcmwl-kernel-source.
        (Reading database ... 170331 files and directories currently installed.)
        Unpacking bcmwl-kernel-source (from .../bcmwl-kernel-source_5.100.82.38+bdcom-0ubuntu6.1_amd64.deb) ...
        Setting up bcmwl-kernel-source (5.100.82.38+bdcom-0ubuntu6.1) ...
        Loading new bcmwl-5.100.82.38+bdcom DKMS files...
        /usr/sbin/dkms: line 467: unset: POST_REMOVE$PRE_BUMLD': not a valid identifier
        /usr/sbin/dkms: line 467: unset:BUILD_E\CLUWIVE_ARCH': not a valid identifier
        /usr/sbin/dkms: line 467: unset: $': not a valid identifier
        /usr/sbin/dkms: line 467: unset:$': not a valid identifier
        /usr/sbin/dkms: line 467: unset: modules_conf_arra}': not a valid identifier
        /usr/sbin/dkms: line 467: unset:$': not a valid identifier
        /usr/sbin/dkms: line 467: unset: $': not a valid identifier
        /usr/sbin/dkms: line 467: unset:$': not a valid identifier
        /usr/sbin/dkms: line 467: unset: $': not a valid identifier
        /usr/sbin/dkms: line 467: unset:$': not a valid identifier
        /usr/sbin/dkms: line 467: unset: `$': not a valid identifier
        /usr/sbin/dkms: line 419: ${!POST_REMOVE$PRE_BUMLD[@]}: bad substitution
        /usr/sbin/dkms: line 419: ${!BUILD_E\CLUWIVE_ARCH[@]}: bad substitution
        /usr/sbin/dkms: line 419: ${!$[@]}: bad substitution
        /usr/sbin/dkms: line 419: ${!$[@]}: bad substitution
        /usr/sbin/dkms: line 419: ${!$[@]}: bad substitution
        /usr/sbin/dkms: line 419: ${!$[@]}: bad substitution
        /usr/sbin/dkms: line 419: ${!$[@]}: bad substitution
        /usr/sbin/dkms: line 419: ${!$[@]}: bad substitution
        /usr/sbin/dkms: line 419: ${!$[@]}: bad substitution
        /usr/sbin/dkms: line 419: ${!$[@]}: bad substitution
        malloc: ../bash/subst.c:3671: assertion botched
        free: start and end chunk sizes differ
        Aborting.../tmp/tmp.pEXTnftUfI: line 4: modules_conf_arra}[[@]}]=[[@]}]}: command not found
        dkms.conf: Error! No 'DEST_MODULE_LOCATION' directive specified for record #0.
        dkms.conf: Error! Directive 'DEST_MODULE_LOCATION' does not begin with '/kernel', '/updates', or '/extra' in record #0.
        dkms.conf: Error! No 'PACKAGE_VERSION' directive specified.
        Error! Bad conf file.
        File: /usr/src/bcmwl-5.100.82.38+bdcom/dkms.conf does not represent a valid dkms.conf file.
        dpkg: error processing bcmwl-kernel-source (--configure):
        subprocess installed post-installation script returned error exit status 8
        Errors were encountered while processing: bcmwl-kernel-source
        E: Sub-process /usr/bin/dpkg returned an error code (1)
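    For reference, the two steps described above, plus a check on what dkms thinks happened, look like this in a terminal (illustrative only - purging and reinstalling is not guaranteed to fix the broken dkms.conf):

        # Remove the package completely, including its config, then reinstall it
        sudo apt-get purge bcmwl-kernel-source
        sudo apt-get install bcmwl-kernel-source

        # Shows whether the Broadcom "wl" module actually built for the running kernel
        dkms status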

    Read the article

  • Gnome Install Error (1)

    - by Guy1984
    I'm trying to install Gnome on my Ubuntu 12.04 P.Pangolin and getting the following errors:

        root@***:~# sudo apt-get install gnome-core gnome-session-fallback
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        gnome-core is already the newest version.
        gnome-session-fallback is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 4 not upgraded.
        5 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]? y
        Setting up bluez (4.98-2ubuntu7) ...
        start: Job failed to start
        invoke-rc.d: initscript bluetooth, action "start" failed.
        dpkg: error processing bluez (--configure):
         subprocess installed post-installation script returned error exit status 1
        dpkg: dependency problems prevent configuration of gnome-bluetooth:
         gnome-bluetooth depends on bluez (>= 4.36); however:
          Package bluez is not configured yet.
        dpkg: error processing gnome-bluetooth (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of gnome-shell:
         gnome-shell depends on gnome-bluetooth (>= 3.0.0); however:
          Package gnome-bluetooth is not configured yet.
        dpkg: error processing gnome-shell (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of gnome-user-share:
         gnome-user-share depends on gnome-bluetooth; however:
          Package gnome-bluetooth is not configured yet.
        dpkg: error processing gnome-user-share (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of gnome-core:
         gnome-core depends on gnome-bluetooth (>= 3.0); however:
          Package gnome-bluetooth is not configured yet.
         gnome-core depends on gnome-shell (>= 3.0); however:
          Package gnome-shell is not configured yet.
         gnome-core depends on gnome-user-share (>= 3.0); however:
          Package gnome-user-share is not configured yet.
        No apport report written because the error message indicates its a followup error from a previous failure.
        No apport report written because the error message indicates its a followup error from a previous failure.
        No apport report written because MaxReports is reached already
        No apport report written because MaxReports is reached already
        dpkg: error processing gnome-core (--configure):
         dependency problems - leaving unconfigured
        Errors were encountered while processing:
         bluez
         gnome-bluetooth
         gnome-shell
         gnome-user-share
         gnome-core
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Syslog:

        Oct 5 16:04:17 ks34900 bluetoothd[5176]: Bluetooth daemon 4.98
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: Starting SDP server
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: opening L2CAP socket: Address family not supported by protocol
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: Server initialization failed
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: Failed to init alert plugin
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: Failed to init time plugin
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: Failed to init proximity plugin
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: Failed to open control socket: Address family not supported by protocol (97)
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: Can't init bnep module
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: Failed to init network plugin
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: Unable to start SCO server socket
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: Failed to init audio plugin
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: Failed to init gatt_example plugin
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: Can't open HCI socket: Address family not supported by protocol (97)
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: adapter_ops_setup failed
        Oct 5 16:04:17 ks34900 kernel: init: bluetooth main process (5176) terminated with status 1
        Oct 5 16:04:17 ks34900 kernel: init: bluetooth main process ended, respawning
        Oct 5 16:04:17 ks34900 bluez: Stopping uarts
        Oct 5 16:04:17 ks34900 bluez: Stopping rfcomm

    Any thoughts?
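    A couple of quick checks that may help narrow this down (illustrative commands, not a confirmed fix): retry the stalled configuration step, and see whether the running kernel offers Bluetooth support at all, since every bluetoothd line above complains that the address family is missing:

        # Retry configuring the half-installed packages
        sudo dpkg --configure -a

        # Is Bluetooth support available in this kernel?
        lsmod | grep -i bluetooth
        sudo modprobe bluetooth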

    Read the article

  • Auto-organized / smart inventory system?

    - by VeXe
    For the past week I've been working on an inventory system with Unity3D. At first I got help from the guys at Design3, but it wasn't too long till we split paths, because I really didn't like the way they did their code; it didn't have any smell of OOP whatsoever. I took it a few steps further - items take more than one slot, an advanced placement system (items try their best to find the best close fit), a local mouse system (the mouse gets trapped in the active bag area), etc. Here's a demo of my work.

    What we would like to have in our game is an auto-organizing feature - not auto-sort. We want this feature because our inventory is going to be in 'real-time' - not like in Resident Evil 1, 2, 3, etc., where you would pause the game and do things in your inventory. Now imagine yourself in a sticky situation, surrounded by zombies, and you don't have bullets. You look around and see that there are bullets nearby on the ground, so you go for them and try to pick them up, but they don't fit! You look at your inventory and find out that if you reorganize some of the items, they will fit! But the player in that situation doesn't have time to reorganize, because he's surrounded by zombies and will die if he stops and organizes the inventory to make space (remember: inventory in real-time, no pausing). Wouldn't it be nice for that to happen automatically? Yes! (I believe this has been implemented in some games like Dungeon Siege or something, so it's certainly doable.) Take a look at this picture for example:

    Yes, if you auto-sort the items you will get your spaces, but it's bad because:

    1. It's expensive: it doesn't take a whole sort operation to free those spaces. In the first picture, just slide the red item at the bottom to the very left and you get the same spaces that you got from the auto-sort.
    2. It's annoying to the player: "Who the F told you to re-order my stuff?"

    I'm not asking for "how to write the code" for this; I'm just asking for some guidance. Where should I look, and what algorithms are involved? Is this something related to graphs and shortest-path stuff? I hope not, because I didn't manage to continue my college studies :/ But even if it is, just tell me and I will learn the related material. Notice there could be more than just one solution. So I guess the first thing I have to do is figure out whether the situation is 'solvable' - if I know how to determine whether a situation is solvable or not, then I can 'solve' it. I just need to know the conditions that make it 'solvable', and I believe there must be some algorithm/data structure for this.

    Here's a picture showing more than one way to fit a 1x3 item - the arrows show just one of the solutions, but if you look you will find more. This is what I ultimately want: not auto-sorting, but finding a solution and applying it. Note that if I spend time on it I will come up with a way to solve it, but it wouldn't be the best way; it's like holding a car wheel with your feet instead of your hands! XD Or like trying to solve an issue that requires arrays when you're not yet aware of their existence! So what is the right approach to this? Hope somebody helps, thanks a lot in advance :)
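    One way to frame the "is this solvable?" question is as a small packing search: try every free placement for each item (the picked-up one included) and backtrack when an item can't be placed. A minimal C# sketch of that idea - the grid/item representation and all names here are illustrative, not taken from the demo project:

        using System.Collections.Generic;

        // Feasibility check: can every item (existing ones plus the new pickup) be placed
        // somewhere in the bag? Items are lists of occupied cell offsets; the bag is a
        // bool grid where true means "cell occupied".
        public static class InventorySolver
        {
            public struct Cell { public int X, Y; public Cell(int x, int y) { X = x; Y = y; } }

            static bool Fits(bool[,] grid, List<Cell> shape, int ox, int oy)
            {
                foreach (var c in shape)
                {
                    int x = ox + c.X, y = oy + c.Y;
                    if (x < 0 || y < 0 || x >= grid.GetLength(0) || y >= grid.GetLength(1) || grid[x, y])
                        return false;
                }
                return true;
            }

            static void Mark(bool[,] grid, List<Cell> shape, int ox, int oy, bool value)
            {
                foreach (var c in shape) grid[ox + c.X, oy + c.Y] = value;
            }

            // Backtracking: place items one by one; if every item gets a spot the situation
            // is solvable and `placements` holds one concrete arrangement.
            public static bool Solve(bool[,] grid, List<List<Cell>> items, int index, Cell[] placements)
            {
                if (index == items.Count) return true;

                for (int x = 0; x < grid.GetLength(0); x++)
                    for (int y = 0; y < grid.GetLength(1); y++)
                    {
                        if (!Fits(grid, items[index], x, y)) continue;

                        Mark(grid, items[index], x, y, true);
                        placements[index] = new Cell(x, y);

                        if (Solve(grid, items, index + 1, placements)) return true;

                        Mark(grid, items[index], x, y, false); // undo and try the next spot
                    }

                return false;
            }
        }

    Run it against a copy of the bag with every existing item plus the new one; if it returns true, animate the moves it found, otherwise reject the pickup. It's exponential in the worst case, but bags are small, and simple pruning (place the biggest items first, stop at the first solution) keeps it fast in practice.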

    Read the article

  • Criminals and Other Illegal Characters

    - by Most Valuable Yak (Rob Volk)
    SQLTeam's favorite Slovenian blogger Mladen (b | t) had an interesting question on Twitter: http://www.twitter.com/MladenPrajdic/status/347057950470307841

    I liked Kendal Van Dyke's (b | t) reply: http://twitter.com/SQLDBA/status/347058908801667072

    And he was right!  This is one of those pretty-useless-but-sounds-interesting propositions that I've based all my presentations on, and most of my blog posts. If you read all the replies you'll see a lot of good suggestions.  I particularly like Aaron Bertrand's (b | t) idea of going into the Unicode character set, since there are over 65,000 characters available.  But how to find an illegal character?  Detective work?

    I'm working on the premise that if SQL Server will reject it as a name it would throw an error.  So all we have to do is generate all Unicode characters, rename a database with that character, and catch any errors. It turns out that dynamic SQL can lend a hand here:

        IF DB_ID(N'a') IS NULL CREATE DATABASE [a];

        DECLARE @c INT=1, @sql NVARCHAR(MAX)=N'', @err NVARCHAR(MAX)=N'';

        WHILE @c<65536
        BEGIN
            BEGIN TRY
                SET @sql=N'alter database ' +
                         QUOTENAME(CASE WHEN @c=1 THEN N'a' ELSE NCHAR(@c-1) END) +
                         N' modify name=' + QUOTENAME(NCHAR(@c));
                RAISERROR(N'*** Trying %d',10,1,@c) WITH NOWAIT;
                EXEC(@sql);
                SET @c+=1;
            END TRY
            BEGIN CATCH
                SET @err=ERROR_MESSAGE();
                RAISERROR(N'Ooops - %d - %s',10,1,@c,@err) WITH NOWAIT;
                BREAK;
            END CATCH
        END

        SET @sql=N'alter database ' + QUOTENAME(NCHAR(@c-1)) + N' modify name=[a]';
        EXEC(@sql);

    The script creates a dummy database "a" if it doesn't already exist, and only tests single characters as a database name.  If you have databases with single character names then you shouldn't run this on that server.

    It takes a few minutes to run, but if you do you'll see that no errors are thrown for any of the characters.  It seems that SQL Server will accept any character, no matter where they're from.  (Well, there's one, but I won't tell you which. Actually there's 2, but one of them requires some deep existential thinking.)

    The output is also interesting, as quite a few codes do some weird things there.  I'm pretty sure it's due to the font used in SSMS for the messages output window, not all characters are available.  If you run it using the SQLCMD utility, and use the -o switch to output to a file, and -u for Unicode output, you can open the file in Notepad or another text editor and see the whole thing.

    I'm not sure what character I'd recommend to answer Mladen's question.  I think the standard tab (ASCII 9) is fine.  There's also several specific separator characters in the original ASCII character set (decimal 28-31). But of all the choices available in Unicode whitespace, I think my favorite would be the Mongolian Vowel Separator.  Or maybe the zero-width space. (that'll be fun to print!)  And since this is Mladen we're talking about, here's a good selection of "intriguing" characters he could use.
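    If you want to try one of those "invisible" favorites yourself, something like this should do it (a quick sketch - it assumes the dummy database [a] from the script above still exists; U+180E, the Mongolian Vowel Separator, is NCHAR(6158)):

        -- Rename the dummy database to a single invisible character, then back again.
        DECLARE @invisible sysname = NCHAR(6158);   -- U+180E, Mongolian Vowel Separator
        EXEC (N'ALTER DATABASE [a] MODIFY NAME = ' + QUOTENAME(@invisible) + N';');

        SELECT name FROM sys.databases WHERE name = NCHAR(6158);   -- it really is there

        EXEC (N'ALTER DATABASE ' + QUOTENAME(NCHAR(6158)) + N' MODIFY NAME = [a];');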

    Read the article

  • How to place rooms procedurally (rule based) in a game world

    - by gardian06
    I am trying to design the algorithm for my level generation, which is a rule-driven system. I have created all the rules for the system, and I have taken care to ensure that all rooms make sense in a grid-type setup. For example, these rooms could make this configuration.

    The logic flow code that I have so far:

        Door {
            Vector3 position;
            POD orient;      // 5 possible values (up is not an option)
            bool Open;
        }

        Room {
            String roomRule;
            Vector3 roomPos;
            Vector3 dimensions;
            POD roomOrient;  // 4 possible values
            List<Door> doors;
        }

        LevelManager {
            float scale = 18f;
            List<Room> usedRooms;
            List<Door> openDoors;
            bool Grid[][][];

            Room CreateRoom(String rule, Vector3 position, POD orient) {
                // place received values based on rule
                // fill in other data
            }

            Vector3 getDimensions(String rule) {
                // return dimensions of the room
            }

            RotateRoom(POD rotateAmount) {
                // rotate all items in the room
            }

            MoveRoom(Room toBeMoved, POD orientation, float distance) {
                // move the position of the room based on inputs
            }

            GenerateMap(Vector3 size, Vector3 start, Vector3 end) {
                Grid = array[size.y][size.x][size.z];
                Room floatingRoom;

                floatingRoom = Room.CreateRoom(S01, start, rand(4));
                usedRooms.Add(floatingRoom);
                for each Door in floatingRoom.doors {
                    openDoors.Add(door);
                }
                // fill used grid spaces

                floatingRoom = Room.CreateRoom(S02, end, rand(4));
                usedRooms.Add(floatingRoom);
                for each Door in floatingRoom.doors {
                    openDoors.Add(door);
                }

                Vector3 nRoomLocation;
                Door workingDoor;
                string workingRoom;
                // fill used grid spaces

                // pick random door on the openDoors list
                workingDoor = /* randomDoor */;
                // get a random rule
                nRoomLocation = workingDoor.position;
                // then I'm lost
            }
        }

    I know that I have to make sure of convergence (namely that the end is reachable), and to keep going until there are no more doors on the openDoors list. Right now I am simply trying to get this to work in 2D (there are rules that introduce 3D), but I am working on the presumption that a rigorous algorithm can be trivially extended to 3D.

    EDIT: my thought pattern so far is to take an existing open door and then pick a random room (restrictions can be put in later), place that room's center at the door's location, move the room in the direction of the door's orientation by half the room's dimension with respect to that axis, and then test against the 3D array to see if all the grid points are open, or have been used, or if there is even space to put the room (caseEdge). If caseEdge (which can also occur in between rooms), then put the door on a toBeClosed list and remove it from the open list (placing a wall or something there). Then do some kind of test that both the start and the goal are connected and reachable from each other (each room has nodes for AI, but I don't want to "have" to pull those out to accomplish this). A sketch of that occupancy test is shown below.

    But this logic has a problem for, say, the U- or L-shaped rooms in my example, and I also have a problem conceptually if the room needs to be rotated.
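    A rough C# sketch of the occupancy test described in the EDIT above, as a method you could drop into LevelManager (the names follow the question's pseudocode, but the bool[,,] grid, the Unity Vector3/Mathf types and the box-shaped footprint are my assumptions):

        // Convert the candidate room's bounding box to grid cells and verify that every
        // cell is in bounds and free. Out-of-bounds is the "caseEdge" from the question.
        bool CanPlaceRoom(bool[,,] grid, Vector3 roomPos, Vector3 dimensions, float scale)
        {
            int x0 = Mathf.FloorToInt((roomPos.x - dimensions.x * 0.5f) / scale);
            int y0 = Mathf.FloorToInt((roomPos.y - dimensions.y * 0.5f) / scale);
            int z0 = Mathf.FloorToInt((roomPos.z - dimensions.z * 0.5f) / scale);
            int x1 = Mathf.CeilToInt((roomPos.x + dimensions.x * 0.5f) / scale);
            int y1 = Mathf.CeilToInt((roomPos.y + dimensions.y * 0.5f) / scale);
            int z1 = Mathf.CeilToInt((roomPos.z + dimensions.z * 0.5f) / scale);

            for (int x = x0; x < x1; x++)
                for (int y = y0; y < y1; y++)
                    for (int z = z0; z < z1; z++)
                    {
                        if (x < 0 || y < 0 || z < 0 ||
                            x >= grid.GetLength(0) || y >= grid.GetLength(1) || z >= grid.GetLength(2))
                            return false;               // caseEdge: room would leave the map
                        if (grid[x, y, z]) return false; // cell already claimed by another room
                    }
            return true;
        }

    A plain bounding-box test like this is exactly what breaks for the U- and L-shaped rooms; for those, each room would need to carry its own list of occupied cells (its footprint, rotated along with the room), and the loop would walk that list instead of a solid box.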

    Read the article

  • Software Center seems to freeze system when installing, syslog has "blocked for more than 120 seconds" errors

    - by nbm
    12.04 (precise) 64-bit
    Kernel Linux 3.2.0-39
    3.6GB memory
    Intel Core 2 Duo CPU @ 2.40GHz x2
    WUBI-installed Ubuntu running on a MacBook Pro 7.1 with OSX running Vista via Boot Camp (hey, I like lots of OS's m'kay?)

    When installing from Ubuntu Software Center my system very frequently freezes. This has happened 4 of the last 5 installs. Most recently I was installing the Google Earth .deb from Google's website: clicking the .deb file automatically opens Software Center (otherwise I would have used Synaptic, as I've grown to expect Software Center to freeze my system and I'm rather tired of it.)

    By "freeze" I mean nothing works: no dash, no launcher, no mouse movement, no alt-tab, can't open terminal (keyboard does not work). Software Center does show the "installing" icon but after that it greys out and I can't click anything. REISUB has no effect but a cold power-down and restart is possible. Occasionally, after 5-10 minutes, I'll be able to move the mouse / use the keyboard and run a launcher command or two, although other open apps (Chrome and Software Center) will still be greyed-out/frozen. (I've never waited longer than that - if still unresponsive after 15 minutes I just power down and restart.)

    Most recently, which is why I am finally posting a question, I waited about 15 minutes and was finally able to open System Monitor while this was going on. Processes tells me that System Monitor is using about 20% of CPU, and nothing else is using much (zeros mostly). In fact I didn't even see Software Center listed? However at this point the system finally partially unfroze, the installation completed, and while I wasn't able to close Software Center I was able to do a system shutdown and fresh restart, and I went and took a look at the syslog.

    In /var/log/syslog I see a lot of "blocked for more than 120 seconds" messages. Similar to: ubuntu hang out with this message :blocked for more than 120 seconds — which has not been answered, and I'm not running a virtual machine. My full syslog with stack traces looks very, very similar to this: Why do tasks on Amazon Xen instance block for over 120 seconds causing server to hang? Note that that question was solved, but that's because the problem was being caused by Amazon and Amazon fixed the bug. I'm not running anything Amazon-related. My syslog does look very similar, however.

    My question is also similar to this: Troubleshooting server hang. But the referenced "duplicate" in that question is about how to kill processes/restart when the system freezes. I know how to kill processes and restart. I want to figure out what is causing the problem so I can try to fix it.

    I realize that I could just use Synaptic instead of Ubuntu Software Center, but I'd like to try to solve the problem if possible. I'm thinking I should perhaps submit a bug report, but I wanted to first see if anyone else was having any similar problems, and if so what you all did to fix it. I see a number of questions about Software Center freezing and others, including those I linked, about the "blocked for more than 120 seconds" log error, but I didn't see any question that links the two. I did save a copy of the syslog report if anyone wants to see it, but as mentioned it's quite similar to the one posted in the Amazon-related question... and I didn't want to take up even more space unnecessarily as, my apologies - this question has already become extremely verbose!
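    For anyone wanting to pull the relevant traces out of their own logs before filing a bug, a quick sketch (the kernel prints a stack trace right after each hung-task warning, so grabbing the lines that follow captures it):

        # Show each hung-task warning plus the ~25 lines that follow it (the stack trace)
        grep -A 25 "blocked for more than 120 seconds" /var/log/syslog | less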

    Read the article
