Search Results

Search found 20313 results on 813 pages for 'batch size'.


  • Unable to remove file using 'rm'

    - by Alvin Sim
    Hi all, on one of our servers (IBM AIX) we have a file under the path /data/1002/ which we are not able to remove or delete using the 'rm' command. The error message we get is "rm: A1208001.002: A file or directory in the path name does not exist." With the "-f" option no error message is displayed, but the file is still there. The file has a size of 0 bytes, and when I use the command "touch A1208001.002" I see two files with the same file name in that directory. The directory listing is as below:

        $ ls -l
        total 56
        -rwxrwxrwx   1 oracle   dba            0 Feb 09 11:57 S1208001.002
        drwxrwxrwx   4 nobody   dba        24576 Feb 09 13:36 backup

    How do I remove this bogus file? Thanks.
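    Not from the original post, but a common way to attack a file whose name contains invisible or non-printable characters is to stop referring to it by name at all and delete it by inode instead. A minimal sketch (the inode number 12345 is hypothetical; take the real one from the ls output):

        $ ls -li                                        # -i prints each file's inode number
        $ find . -xdev -inum 12345 -exec rm -i {} \;    # remove exactly that inode, prompting first

    Printing the name with "ls | cat -v" (or "ls -b", on systems that support it) also makes any control characters hiding in the name visible.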

    Read the article

  • nginx tmp file folder running out of disk space

    - by user1179459
    I get a MySQL disk space error: Can't create/write to file '/tmp/#sql_777_0.MYI' (Errcode: 28), mainly because my nginx server is writing files into the tmp folder which don't get cleaned up. I added this command to the crontab, as per instructions in the nginx manual, but it doesn't seem to be doing the trick (I don't understand what it does either):

        0 */1 * * * /usr/sbin/tmpwatch -am 1 /tmp/nginx_client

    Then I had to run these commands manually:

        cd /tmp/nginx_client
        find -name * | xargs rm

    I need to know what I should do to automate this cleanup. Is there a way to increase the /tmp/ - /var/tmp/ size without reformatting or doing anything dangerous? Can I change the location of the MySQL TMP files?
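    On the last question: MySQL's temporary-file location is controlled by the tmpdir server variable, so it can be pointed at a bigger partition without touching /tmp at all. A hedged sketch (the paths are examples, and MySQL has to be restarted, however your distro does that, to pick the change up):

        $ sudo mkdir -p /var/lib/mysql-tmp
        $ sudo chown mysql:mysql /var/lib/mysql-tmp
        # then in /etc/my.cnf, under the [mysqld] section:
        #   tmpdir = /var/lib/mysql-tmp
        $ sudo service mysql restart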

    Read the article

  • Why can't I resize some virtual thick disks in VMware vSphere Client

    - by Paul-Jan
    Let me start off by mentioning I am really, really new to this whole VMware ESX/vSphere thing, so apologies if this is a FAQ or has been asked a million times before with better wording. Also, please correct me if I'm using the wrong terms all over the place. We just got this brand new VMware ESX server, and we started importing virtual machines from around the office. Some of them were originally based on VMware, others were converted from MS Virtual Server. One of these machines needs a bigger disk, but in the vSphere Client the disk size spinedit is disabled. The disk type is thick. However, for other machines with the same disk type the spinedit is enabled and I can happily resize their disks. So here is the question: why can't I resize this disk in particular? All machines are freshly converted, so I assume there are no snapshots yet, which I read would stop a resize from happening (again, correct me if I'm wrong).
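    Not part of the original question, but for comparison: where the GUI refuses, the same grow operation can usually be done from the ESX service console with vmkfstools, assuming the VM is powered off and genuinely has no snapshots. The datastore path and target size below are hypothetical:

        $ vmkfstools -X 40G /vmfs/volumes/datastore1/myvm/myvm.vmdk   # extend the virtual disk to 40 GB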

    Read the article

  • TTY Resolution in Xubuntu 9.10

    - by Zurahn
    I've exhausted my ability to search through Google for this, so I'm giving it a go here. What I'm trying to do is increase the resolution (or decrease the font size) in the TTY terminals. Xubuntu 9.10 uses GRUB2, and everything I can find directs me to edit the /etc/default/grub file in order to add vga=XXX to the GRUB_CMDLINE_LINUX value, and this simply doesn't work. Despite endless fiddling with the file, nothing ever seems to change. On my netbook running an earlier version I had success with this command:

        dpkg-reconfigure console-setup

    But once again it yields no change. Got any ideas?
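    Two things worth checking that the post doesn't mention (assumptions about this setup, not confirmed from it): GRUB2 ignores edits to /etc/default/grub until grub.cfg is regenerated, and on GRUB2 the vga= parameter is deprecated in favour of GRUB_GFXMODE. A sketch:

        # in /etc/default/grub (the mode must be one your card supports):
        #   GRUB_GFXMODE=1280x800
        #   GRUB_GFXPAYLOAD_LINUX=keep   # newer GRUB2 builds; older ones use "set gfxpayload=keep"
        $ sudo update-grub               # regenerate /boot/grub/grub.cfg, then reboot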

    Read the article

  • Safari Mobile Multi-Line <Select> aka GWT Multi-Line ListBox

    - by McTrafik
    Hi guys. Working on a webapp here that must run on the iPad (so, Safari Mobile). I have this code that works fine in just about anything except the iPad:

        <select class="gwt-ListBox" size="12" multiple="multiple">
          <option value="Bleeding Eyelashes">Bleeding Eyelashes</option>
          <option value="Smelly Pupils">Smelly Pupils</option>
          <option value="Bushy Eyebrows">Bushy Eyebrows</option>
          <option value="Green Vessels">Green Vessels</option>
          <option value="Sucky Noses">Sucky Noses</option>
        </select>

    What it's supposed to look like is a box with 12 lines and 5 of them filled up. It works fine in FF, IE, Chrome, and Safari on Windows. But when I open it on the iPad, it's just a single line! Styling it with CSS doesn't work; it just makes the single line bigger if I set the height. Is there a way to make it behave the same way as in normal browsers, or do I have to make a custom component? Thanks.

    Read the article

  • Laptop with Intel Graphics: external VGA monitor only gets signal on boot (no "hot plugging")

    - by iveand
    I am able to get an external VGA monitor (or projector) to work if I start my laptop with it connected. However, if I start the laptop with it disconnected, there is no signal on the external. The Displays screen shows the external and thinks that it is active, but no signal is being sent to it. This has been a persistent problem since 10.04 (I am now on 12.04... each upgrade hoping something is improved). I should note that even when it works (starting with the display connected), Displays still says the monitor is "unknown" (but it sends the signal). For the correct resolution to display I have had to add a few xrandr lines for my monitor to my .xprofile file... otherwise resolution is limited to the default 1024x768. So the resolution issues can be worked around, but the main issue is that the external doesn't get a signal without starting the machine with it connected. I have tried: adding i915.modeset=1 to grub (also i965.modeset=1, since someone posted that this helped even though lshw shows i915); adding the following PPA and doing a dist-upgrade: sudo add-apt-repository ppa:xorg-edgers/ppa. Here are the details. Laptop: Toshiba Tecra M10. lspci listing for video:

        00:02.0 VGA compatible controller [0300]: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller [8086:2a42] (rev 07)

    sudo lshw -C video listing:

        *-display:0
             description: VGA compatible controller
             product: Mobile 4 Series Chipset Integrated Graphics Controller
             vendor: Intel Corporation
             physical id: 2
             bus info: pci@0000:00:02.0
             version: 07
             width: 64 bits
             clock: 33MHz
             capabilities: msi pm vga_controller bus_master cap_list rom
             configuration: driver=i915 latency=0
             resources: irq:46 memory:ff400000-ff7fffff memory:e0000000-efffffff ioport:cff8(size=8)
        *-display:1 UNCLAIMED
             description: Display controller
             product: Mobile 4 Series Chipset Integrated Graphics Controller
             vendor: Intel Corporation
             physical id: 2.1
             bus info: pci@0000:00:02.1
             version: 07
             width: 64 bits
             clock: 33MHz
             capabilities: pm bus_master cap_list
             configuration: latency=0
             resources: memory:ffc00000-ffcfffff

    "System Info" shows my graphics as the following: Mobile Intel® GM45 Express Chipset x86/MMX/SSE2
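    Not from the original post, but hot-plug detection can often be forced by hand once the cable is attached; if this works, the problem is detection rather than the driver. The output name VGA1 is an assumption, so check the real one first:

        $ xrandr -q                       # list output names and their connection state
        $ xrandr --output VGA1 --auto     # force the external output on at its preferred mode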

    Read the article

  • system overrides my mount parameters in /etc/fstab

    - by valya
    [.../~]$ mount
        /dev/sda4 on / type ext4 (rw,commit=60,commit=0)

    [.../~]$ cat /etc/fstab
        # UNCONFIGURED FSTAB FOR BASE SYSTEM
        UUID=70739c04-fcb6-4747-803c-824f9c894f41 / ext4 defaults,commit=60 0 1

    What can I do about it? It seems strange. I want to be able to set any commit time I want.

    Edit: added /proc/mounts contents:

    [.../~]$ cat /proc/mounts
        rootfs / rootfs rw 0 0
        none /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
        none /proc proc rw,nosuid,nodev,noexec,relatime 0 0
        none /dev devtmpfs rw,relatime,size=886332k,nr_inodes=221583,mode=755 0 0
        none /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
        fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
        /dev/disk/by-uuid/70739c04-fcb6-4747-803c-824f9c894f41 / ext4 rw,relatime,barrier=1,data=ordered 0 0
        none /sys/kernel/debug debugfs rw,relatime 0 0
        none /sys/kernel/security securityfs rw,relatime 0 0
        none /dev/shm tmpfs rw,nosuid,nodev,relatime 0 0
        none /var/run tmpfs rw,nosuid,relatime,mode=755 0 0
        none /var/lock tmpfs rw,nosuid,nodev,noexec,relatime 0 0
        /dev/sda3 /media/megahard fuseblk rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other,blksize=4096 0 0
        cgroup /dev/cgroup/cpu cgroup rw,relatime,cpu,release_agent=/usr/local/sbin/cgroup_clean 0 0
        gvfs-fuse-daemon /home/va1en0k/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,relatime,user_id=1000,group_id=1000 0 0
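    A quick way (not from the original post) to test the commit setting independently of fstab is to apply it to the live mount and re-check /proc/mounts:

        $ sudo mount -o remount,commit=60 /
        $ grep ' / ' /proc/mounts

    If the remount sticks, then whatever mounts the root filesystem at boot is the likely culprit overriding fstab, not the kernel rejecting the option.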

    Read the article

  • Samba: session setup failed: NT_STATUS_LOGON_FAILURE

    - by stivlo
    I tried to set up Samba with "unix password sync", but I still get a logon failure. I am running Ubuntu Natty Narwhal.

        $ smbclient -L localhost
        Enter stivlo's password:
        session setup failed: NT_STATUS_LOGON_FAILURE

    Here is my /etc/samba/smb.conf:

        [global]
        workgroup = obliquid
        server string = %h server (Samba, Ubuntu)
        dns proxy = no
        log file = /var/log/samba/log.%m
        max log size = 1000
        syslog = 0
        panic action = /usr/share/samba/panic-action %d
        security = user
        encrypt passwords = true
        passdb backend = tdbsam
        obey pam restrictions = yes
        unix password sync = yes
        passwd program = /usr/bin/passwd %u
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
        pam password change = yes
        map to guest = bad user

        [www]
        path = /var/www
        browsable = yes
        read only = no
        create mask = 0755

    After modifying I restarted the servers:

        $ sudo restart smbd
        $ sudo restart nmbd

    However I still can't log on with my Unix username and password. Can anyone please help? Thank you in advance!
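    One step the post never mentions doing, and a very common cause of NT_STATUS_LOGON_FAILURE (an assumption about this setup, not confirmed from it): with passdb backend = tdbsam, each user still has to be added to Samba's own password database once. "unix password sync" only pushes later password changes from Samba to Unix, not the other way round. A sketch:

        $ sudo smbpasswd -a stivlo          # create the Samba account and set its password
        $ smbclient -L localhost -U stivlo  # then retry the listing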

    Read the article

  • Need some advice regarding collision detection with the sprite changing its width and height

    - by Frank Scott
    So I'm messing around with collision detection in my tile-based game and everything works fine and dandy using this method. However, now I am trying to implement sprite sheets so my character can have walking and jumping animations. For one, I'd like to be able to have each frame be of variable size, I think. I want collision detection to be accurate, and during a jumping animation the sprite's height will be shorter (because of the calves meeting the hamstrings). Again, this also works fine at the moment. I can get the character to animate properly each frame and cycle through animations. The problems arise when the width and height of the character change. Often its position will be corrected by the collision detection system and the character will be rubber-banded to random parts of the map, or even go outside the map bounds. For some reason, with the linked collision detection algorithm, when the width or height of the sprite is changed on the fly the entire algorithm breaks down. The solution I have found so far is to have a single width and height for the sprite that remains constant, and only adjust the source rectangle for drawing. However, I'm not sure exactly what to set as the sprite's constant bounding box, because it varies so much between the different animations. So now I'm not sure what to do. I'm toying with the idea of pixel-perfect collision detection but I'm not sure if it would really be worth it. Does anyone know how Braid does their collision detection? My game is also a 2D sidescroller and I was quite impressed with how it was handled in that game. Thanks for reading.

    Read the article

  • Umbraco Gold Partner going from strength to strength.

    - by Vizioz Limited
    It is amazing how time flies when you are having fun, and we have certainly been having an interesting year at Vizioz. I thought it was about time I wrote another blog post and shared some of our news.

    Over the last 6 months we have:

    a) Had the pleasure of working with some great clients from the USA, Ireland and the UK
    b) Built some interesting and complex sites for multi-national brands (under NDAs; you'd be impressed if you knew!)
    c) Converted the Umbraco Users Manual to a free iBook for all those iPad-owning Umbraco users
    d) Hired three new employees (Sam, Pearl & Zaara)
    e) Given our notice on our current office (see below)

    And on the horizon:

    a) We have submitted Allen for Umbraco to the Apple App Store for approval (hopefully this will be available very soon!)
    b) We are about to sign a lease for a new office that is twice the size of our current one, so we will have room for a meeting room, a chill-out room and some more employees!

    So it's exciting times at Vizioz. Thank you to all our fantastic clients for making this possible; we look forward to working with you all over the years to come. One thing we don't shout about as much as we probably should: we also renewed our Umbraco Gold Partner status for the second year, showing our commitment to the Umbraco CMS. If you are looking for a great Umbraco partner with experienced developers to build your new site, or to take over the ongoing maintenance of an existing site, then pick up the phone and give us a call; we would love to add you to our list of happy customers!

    Read the article

  • Supplementary Developer Laptop

    - by David Smith
    I'm looking to buy a laptop with the following specs for a developer. The goal is to have a development machine supplementing the dev's desktop. During work hours the dev will be on a beefy desktop. For working while on the go (trains, client sites, code camps) it would be nice to have a machine which can run Visual Studio 2008 without needing to remote-desktop into their primary machine. What do you think is the lowest-cost laptop meeting this need? Here are the specs I have in mind:

    SSD drive, 64 GB: doesn't need to be huge, most data is stored on servers. Will need to fit Windows 7, IIS, SQL Server, and Visual Studio 2010.
    RAM: 3 GB
    Processor: Pentium Core 2 Duo
    Screen size: 14 inches
    OS: doesn't matter, it will be paved with Windows 7 Ultimate
    Optical drive: omitted would be a plus
    Weight and battery life aren't so important because the machine will be plugged in almost all the time.

    Read the article

  • Recover Data Like a Forensics Expert Using an Ubuntu Live CD

    - by Trevor Bekolay
    There are lots of utilities to recover deleted files, but what if you can’t boot up your computer, or the whole drive has been formatted? We’ll show you some tools that will dig deep and recover the most elusive deleted files, or even whole hard drive partitions.

    We’ve shown you simple ways to recover accidentally deleted files, even a simple method that can be done from an Ubuntu Live CD, but for hard disks that have been heavily corrupted, those methods aren’t going to cut it. In this article, we’ll examine four tools that can recover data from the most messed up hard drives, regardless of whether they were formatted for a Windows, Linux, or Mac computer, or even if the partition table is wiped out entirely.

    Note: These tools cannot recover data that has been overwritten on a hard disk. Whether a deleted file has been overwritten depends on many factors: the quicker you realize that you want to recover a file, the more likely you will be able to do so.

    Our setup

    To show these tools, we’ve set up a small 1 GB hard drive, with half of the space partitioned as ext2, a file system used in Linux, and half the space partitioned as FAT32, a file system used in older Windows systems. We stored ten random pictures on each partition. We then wiped the partition table from the hard drive by deleting the partitions in GParted. Is our data lost forever?

    Installing the tools

    All of the tools we’re going to use are in Ubuntu’s universe repository. To enable the repository, open Synaptic Package Manager by clicking on System in the top-left, then Administration > Synaptic Package Manager. Click on Settings > Repositories and add a check in the box labelled “Community-maintained Open Source software (universe)”. Click Close, and then in the main Synaptic Package Manager window, click the Reload button. Once the package list has reloaded, and the search index rebuilt, search for and mark for installation one or all of the following packages: testdisk, foremost, and scalpel.

    The testdisk package includes TestDisk, which can recover lost partitions and repair boot sectors, and PhotoRec, which can recover many different types of files from tons of different file systems. Foremost, originally developed by the US Air Force Office of Special Investigations, recovers files based on their headers and other internal structures. Foremost operates on hard drives or drive image files generated by various tools. Finally, scalpel performs the same functions as foremost, but is focused on enhanced performance and lower memory usage. Scalpel may run better if you have an older machine with less RAM.

    Recover hard drive partitions

    If you can’t mount your hard drive, then its partition table might be corrupted. Before you start trying to recover your important files, it may be possible to recover one or more partitions on your drive, recovering all of your files with one step. TestDisk is the tool for the job. Start it by opening a terminal (Applications > Accessories > Terminal) and typing in:

        sudo testdisk

    If you’d like, you can create a log file, though it won’t affect how much data you recover. Once you make your choice, you’re greeted with a list of the storage media on your machine. You should be able to identify the hard drive you want to recover partitions from by its size and label. TestDisk asks you to select the type of partition table to search for. In most cases (ext2/3, NTFS, FAT32, etc.) you should select Intel and press Enter. Highlight Analyse and press Enter.

    In our case, our small hard drive has previously been formatted as NTFS. Amazingly, TestDisk finds this partition, though it is unable to recover it. It also finds the two partitions we just deleted. We are able to change their attributes, or add more partitions, but we’ll just recover them by pressing Enter. If TestDisk hasn’t found all of your partitions, you can try doing a deeper search by selecting that option with the left and right arrow keys. We only had these two partitions, so we’ll recover them by selecting Write and pressing Enter. TestDisk informs us that we will have to reboot.

    Note: If your Ubuntu Live CD is not persistent, then when you reboot you will have to reinstall any tools that you installed earlier.

    After restarting, both of our partitions are back to their original states, pictures and all.

    Recover files of certain types

    For the following examples, we deleted the 10 pictures from both partitions and then reformatted them.

    PhotoRec

    Of the three tools we’ll show, PhotoRec is the most user-friendly, despite being a console-based utility. To start recovering files, open a terminal (Applications > Accessories > Terminal) and type in:

        sudo photorec

    To begin, you are asked to select a storage device to search. You should be able to identify the right device by its size and label. Select the right device, and then hit Enter. PhotoRec asks you to select the type of partition to search. In most cases (ext2/3, NTFS, FAT, etc.) you should select Intel and press Enter. You are given a list of the partitions on your selected hard drive. If you want to recover all of the files on a partition, then select Search and hit Enter. However, this process can be very slow, and in our case we only want to search for picture files, so instead we use the right arrow key to select File Opt and press Enter. PhotoRec can recover many different types of files, and deselecting each one would take a long time. Instead, we press “s” to clear all of the selections, and then find the appropriate file types (jpg, gif, and png) and select them by pressing the right arrow key. Once we’ve selected these three, we press “b” to save these selections. Press Enter to return to the list of hard drive partitions. We want to search both of our partitions, so we highlight “No partition” and “Search” and then press Enter.

    PhotoRec prompts for a location to store the recovered files. If you have a different healthy hard drive, then we recommend storing the recovered files there. Since we’re not recovering very much, we’ll store it on the Ubuntu Live CD’s desktop.

    Note: Do not recover files to the hard drive you’re recovering from.

    PhotoRec is able to recover the 20 pictures from the partitions on our hard drive! A quick look in the recup_dir.1 directory that it creates confirms that PhotoRec has recovered all of our pictures, save for the file names.

    Foremost

    Foremost is a command-line program with no interactive interface like PhotoRec’s, but it offers a number of command-line options to get as much data out of your hard drive as possible. For a full list of options that can be tweaked via the command line, open up a terminal (Applications > Accessories > Terminal) and type in:

        foremost -h

    In our case, the command-line options that we are going to use are:

    -t, a comma-separated list of types of files to search for. In our case, this is “jpeg,png,gif”.
    -v, enabling verbose mode, giving us more information about what foremost is doing.
    -o, the output folder to store recovered files in. In our case, we created a directory called “foremost” on the desktop.
    -i, the input that will be searched for files. This can be a disk image in several different formats; however, we will use a hard disk, /dev/sda.

    Our foremost invocation is:

        sudo foremost -t jpeg,png,gif -o foremost -v -i /dev/sda

    Your invocation will differ depending on what you’re searching for and where you’re searching for it. Foremost is able to recover 17 of the 20 files stored on the hard drive. Looking at the files, we can confirm that these files were recovered relatively well, though we can see some errors in the thumbnail for 00622449.jpg. Part of this may be due to the ext2 filesystem. Foremost recommends using the -d command-line option for Linux file systems like ext2. We’ll run foremost again, adding the -d command-line option to our invocation:

        sudo foremost -t jpeg,png,gif -d -o foremost -v -i /dev/sda

    This time, foremost is able to recover all 20 images! A final look at the pictures reveals that they were recovered with no problems.

    Scalpel

    Scalpel is another powerful program that, like Foremost, is heavily configurable. Unlike Foremost, Scalpel requires you to edit a configuration file before attempting any data recovery. Any text editor will do, but we’ll use gedit to change the configuration file. In a terminal window (Applications > Accessories > Terminal), type in:

        sudo gedit /etc/scalpel/scalpel.conf

    scalpel.conf contains information about a number of different file types. Scroll through this file and uncomment lines that start with a file type that you want to recover (i.e. remove the “#” character at the start of those lines). Save the file and close it. Return to the terminal window. Scalpel also has a ton of command-line options that can help you search quickly and effectively; however, we’ll just define the input device (/dev/sda) and the output folder (a folder called “scalpel” that we created on the desktop). Our invocation is:

        sudo scalpel /dev/sda -o scalpel

    Scalpel is able to recover 18 of our 20 files. A quick look at the files scalpel recovered reveals that most of our files were recovered successfully, though there were some problems (e.g. 00000012.jpg).

    Conclusion

    In our quick toy example, TestDisk was able to recover two deleted partitions, and PhotoRec and Foremost were able to recover all 20 deleted images. Scalpel recovered most of the files, but it’s very likely that playing with the command-line options would have enabled us to recover all 20 images. These tools are lifesavers when something goes wrong with your hard drive. If your data is on the hard drive somewhere, then one of these tools will track it down!
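    Not from the original article, but a quick way to sanity-check whatever the carvers produced: the file utility identifies files by content, so it flags truncated or misidentified carvings at a glance. The folder name assumes the foremost output directory used above:

        $ file foremost/jpg/*.jpg | grep -v 'JPEG image data'   # list only the suspect files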

    Read the article

  • gl_PointCoord always zero

    - by Jonathan
    I am trying to draw point sprites in OpenGL with a shader, but gl_PointCoord is always zero. Here is my code. Setup (shader creation includes glBindAttribLocation(program, ATTRIB_P, "p");):

        glEnableVertexAttribArray(ATTRIB_P);

    In the rendering loop:

        glUseProgram(shader_particles);
        float vertices[] = {0.0f, 0.0f, 0.0f};
        glEnable(GL_TEXTURE_2D);
        glEnable(GL_POINT_SPRITE);
        glEnable(GL_VERTEX_PROGRAM_POINT_SIZE);
        //glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE); (tried with this on/off, doesn't work)
        glVertexAttribPointer(ATTRIB_P, 3, GL_FLOAT, GL_FALSE, 0, vertices);
        glDrawArrays(GL_POINTS, 0, 1);

    Vertex shader:

        attribute highp vec4 p;
        void main() {
            gl_PointSize = 40.0f;
            gl_Position = p;
        }

    Fragment shader:

        void main() {
            // if the coords range from 0-1, this should draw a square with black, red, green, yellow corners
            gl_FragColor = vec4(gl_PointCoord.st, 0, 1);
        }

    But this only draws a black square with a size of 40. What am I doing wrong? Edit: Point sprites work when I use the fixed-function pipeline, but I need to use shaders because in the end the code will be for OpenGL ES 2.0:

        glUseProgram(0);
        glEnable(GL_TEXTURE_2D);
        glEnable(GL_POINT_SPRITE);
        glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE);
        glPointSize(40);
        glBegin(GL_POINTS);
        glVertex3f(0.0f, 0.0f, 0.0f);
        glEnd();

    Is anyone able to get point sprites working with shaders? If so, please share some code.

    Read the article

  • PHP OCI8 and Oracle 11g DRCP Connection Pooling in Pictures

    - by christopher.jones
    Here is a screen shot from a PHP OCI8 connection pooling demo that I like to run. It graphically shows how little database host memory is needed when using DRCP connection pooling with Oracle Database 11g. Migrating to DRCP can be as simple as starting the pool and changing the connection string in your PHP application. The script that generated the data for this graph was a simple "Parts" query application being run under various simulated user loads. I was running the database on a small Oracle Linux server with just 2G of memory. I used PHP OCI8 1.4. Apache is in pre-fork mode, as needed for PHP. Each graph has time on the horizontal axis in arbitrary 'tick' time units. Click the image to see it full sized.

    Pooled connections

    Beginning with the top left graph: at tick time 65 I used Apache's 'ab' tool to start 100 concurrent 'users' running the application. These users connected to the database using DRCP:

        $c = oci_pconnect('phpdemo', 'welcome', 'myhost/orcl:pooled');

    A second hundred DRCP users were added to the system at tick 80 and a final hundred users added at tick 100. At about tick 110 I stopped the test and restarted Apache. This closed all the connections. The bottom left graph shows the number of statements being executed by the database per second, with some spikes for background database activity and some variability for this small test. Each extra batch of users adds another 'step' of load to the system. Looking at the top right Server Process graph shows the database server processes doing the query work for each web user. As user load is added, the DRCP server pool increases (in green). The pool is initially at its default size of 4 and quickly ramps up to about (I'm guessing) 35. At tick time 100 the pool increases to my configured maximum of 40 processes. Those 40 processes are doing the query work for all 300 web users. When I stopped the test at tick 110, the pooled processes remained open waiting for more users to connect. If I had left the test quiet for the DRCP 'inactivity_timeout' period (300 seconds by default), the pool would have shrunk back to 4 processes. Looking at the bottom right, you can see the amount of memory being consumed by the database. During the initial quiet period about 500M of memory was in use. The absolute number is just an indication of my particular DB configuration. As the number of pooled processes increases, each process needs more memory. You can see the shape of the memory graph echoes the Server Process graph above it. Each of the 300 web users will also need a few kilobytes, but this is almost too small to see on the graph.

    Non-pooled connections

    Compare the DRCP case with using 'dedicated server' processes. At tick 140 I started 100 web users who did not use pooled connections:

        $c = oci_pconnect('phpdemo', 'welcome', 'myhost/orcl');

    This connection string change is the only difference between the two tests. At ticks 155 and 165 I started two more batches of 100 simulated users each. At about tick 195 I stopped the user load but left Apache running. Apache then gradually returned to its quiescent state, killing idle httpd processes and producing the downward slope at the right of the graphs as the persistent database connection in each Apache process was closed. The Executions per Second graph on the bottom left shows the same step increases as for the earlier DRCP case. The database is handling this load. But look at the number of Server processes on the top right graph. There is now a one-to-one correspondence between Apache/PHP processes and DB server processes. Each PHP process has one DB server process dedicated to it. Hence the term 'dedicated server'. The memory required on the database is proportional to all those database server processes started. Almost all my system's memory was consumed. I doubt it would have coped with any more user load.

    Summary

    Oracle Database 11g DRCP connection pooling significantly reduces database host memory requirements, allowing more system memory to be allocated for the SGA and allowing the system to scale to handle thousands of concurrent PHP users. Even for small systems, using DRCP allows more web users to be active. More information about PHP and DRCP can be found in the PHP Scalability and High Availability chapter of The Underground PHP and Oracle Manual.

    Read the article

  • Thunderbird compact is taking forever

    - by mulllhausen
    One day I came in to work and found that our development server, an Ubuntu box, had a full hard disk. I did a bit of investigation using the du command, and it seems like Mozilla Thunderbird is the major culprit. After burning off some backups, the disk was left at 94%:

        $ df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda1             895G  791G   59G  94% /
        none                  4.0G  300K  4.0G   1% /dev
        none                  4.0G  1.4M  4.0G   1% /dev/shm
        none                  4.0G  140K  4.0G   1% /var/run
        none                  4.0G     0  4.0G   0% /var/lock
        none                  4.0G     0  4.0G   0% /lib/init/rw

        $ cd
        $ du -ch | grep [0-9]G
        666G    ./.thunderbird/ccsmcruu.default/ImapMail/mail.adofms.com.au
        666G    ./.thunderbird/ccsmcruu.default/ImapMail
        667G    ./.thunderbird/ccsmcruu.default
        667G    ./.thunderbird
        2.2G    ./.VirtualBox/Machines/iBike/Snapshots
        2.2G    ./.VirtualBox/Machines/iBike
        2.2G    ./.VirtualBox/Machines
        2.2G    ./.VirtualBox
        670G    .
        670G    total

    I did some reading and found that Mozilla Thunderbird does not compact files by default, i.e. all of the old emails that were sent to trash are still kept. One of the mailboxes used to get a lot of spam, so I guess this accounts for the 667GB. I opened up Thunderbird to see how much space the inbox actually takes up, and it turns out to be approximately 500MB, over 1000 times less than the stuff that has not been compacted over the years. So I right-clicked on the inbox directory in the tree on the left of Thunderbird and selected 'Compact'. I left it for about 12 hours, but even after that it still said 'compacting folder' on the status bar. I don't use Thunderbird on this PC; it belonged to a colleague who has left the company. However, I do occasionally need to look through the inbox for references to the project I am working on, so deleting all traces of Thunderbird is not an option. My question is: is there any way I can monitor the progress of Thunderbird's compacting function? I would really like to know how long it is going to take. Also, is there any way I can speed up the compacting process?
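    Not from the original post, but one rough way to watch progress: while compacting, Thunderbird typically rewrites the mbox file to a temporary file (usually named nstmp) in the same folder, so watching that file grow gives a crude progress bar against the original mbox size. The profile path is the one from the du output above:

        $ watch -n 10 'ls -lh ~/.thunderbird/ccsmcruu.default/ImapMail/mail.adofms.com.au'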

    Read the article

  • Lightning talk: Coderetreat

    - by Michael Williamson
    In the spirit of trying to encourage more deliberate practice amongst coders in Red Gate, Lauri Pesonen had the idea of running a coderetreat in Red Gate. Lauri and I ran the first one a few weeks ago: given that neither of us had even been to a coderetreat before, let alone run one, I think it turned out quite well. The participants gave positive feedback, saying that they enjoyed the day, wrote some thought-provoking code and would do it again. Sam Blackburn was one of the attendees, and gave a lightning talk to the other developers in one of our regular lightning talk sessions. In case you can’t watch the video, I’ve transcribed the talk below, although I’d recommend watching the video if you can — I didn’t have much time to do the transcribing!

    So, what is a coderetreat? It’s not just something in Red Gate; there’s a website and everything, although it’s not a very big website. It calls itself a community network. The basic ideas behind coderetreat are: you’ve got one day, and you split it into one-hour sections. You spend three quarters of that coding, and do a little retrospective at the end. You’re supposed to start fresh each session; we were told to delete our code after every session. We were in pairs, swapping after each session, and we did the same task every time. In fact, Conway’s Game of Life is the only task mentioned anywhere that I can find for coderetreat. So I don’t know what we’ll do next time, or if we’re meant to do the same thing again. There are some guiding principles which felt to us like restrictions, that you have to code in crazy ways to encourage better code. The final thing is that it’s supposed to be free for outsiders to join. It’s meant to be a kind of networking thing, where you link up with people from other companies.

    We had a pilot day with Michael and Lauri. Since it was basically the first time any of us had done anything like this, everybody was from Red Gate. We didn’t chat to anybody else for the initial one. The task was Conway’s Game of Life, which most of you have probably heard of; all but one of us knew about it when we did the coderetreat. I won’t go into the details of what it is, but it felt like the right size of task: basically one or two groups actually produced something working by the end of the day, and of course that doesn’t mean it’s necessarily a day’s work to produce that, because we were starting again every hour.

    The task really drives you more than trying to create good code, I found. It was really tempting to try and get it working rather than stick to the rules. But it’s really good to stop and try again, because there are so many what-ifs when you’ve finished writing something: “what if I’d done it this way?”. You can answer all those questions at a coderetreat, because it’s not about getting a product out the door, it’s about learning and playing with ideas.

    So we had all these different practices we were trying. I’ll try and go through most of these.

    Single responsibility is this idea that everything should do just one thing. It was the very first session; we were still trying to figure out how do you go about the Game of Life? So by the end of forty-five minutes we hadn’t produced very much for that first session. We were still thinking, “Do we start with a board, how do we represent all these squares? It can be infinitely big, help, this is getting really difficult!”. So most of us didn’t really get anywhere on the first one. Although it was interesting that some people started with the board, one group started with the FateDecider class that decides whether things live or die. A sort of god class, but in a good way. They managed to implement all of the rules without even defining how the squares were arranged or anything like that.

    Another thing we tried was TDD (test-driven development). I’m sure most of you know what TDD is:

    Write a test, watch it fail for the right reason
    Write code to pass the test, watch it pass
    Refactor, check the test still passes
    Repeat!

    It basically worked: we were able to produce code, but we often found the tests defined the direction that code went, which is obviously the idea of TDD. But you tend to find that by the time you’ve even written your first assertion, which is supposed to be the very first thing you write, because you write your tests backwards from the assertions back to the initial conditions, you’ve already constrained the logic of the code in some way by the time you’ve done that. You then get to this situation of, “Well, we actually want to go in a slightly different direction. Can we do this?”. Can we write tests that don’t constrain the architecture?

    Wrapping up all primitives: it’s kind of turtles all the way down. We had a Size, which has a Width and Height, which both derive from Dimension. You’ve got pages of code before you’ve even done anything.

    No getters and setters (use tell-don’t-ask instead): mocks and stubs for tests are required if you want to assert that your results are what you think they should be. You can’t just check the internal state of the code. And people found that really challenging, and it made them think in a different way, which I think is really good.

    Not having mutable state: that was kind of confusing, because we weren’t quite sure what fitted within that rule and what didn’t, and I think we were trying too hard to follow the rule rather than the guideline.

    No if-statements: you’re supposed to use polymorphism instead, but polymorphism still requires a factory with conditional behaviour. We did something really crazy to get around this:

        public T If(bool condition, Func<T> left, Func<T> right)
        {
            var dict = new Dictionary<bool, Func<T>> {{true, left}, {false, right}};
            return dict[condition].Invoke();
        }

    That is not really polymorphism, is it?

    For-loops: you can always replace a for-loop with recursion, but it doesn’t tend to make it any more readable unless it’s the kind of task that really lends itself to that. So it was interesting, it was good practice, but it wouldn’t make things easier unless it’s the kind of tree-structure algorithm where that would help.

    Having a limit on the number of levels of indentation: again, I think it does produce very nice, clean code, but it wasn’t actually a challenge, because you just extract methods. That’s quite a useful thing, because you can apply that to real code and say, “Okay, should this method really be going crazy like this?”

    No talking: we hated that. It’s like there’s two of you at a computer, and one of you is doing the typing; what does the other guy do if they’re not allowed to talk? The answer is TDD ping-pong: one person writes the tests, and then the other person writes the code to pass the test. And that creates communication without actually having to have discussion about things, which is kind of cool.

    No code comments: just makes no difference to anything. It’s a forty-five minute exercise, so what are you going to put comments in code for?

    Finally, this is my fault. I discovered an entertaining way of doing the calculation that was kind of cool (using convolutions over the state of the board). Unfortunately, it turns out to be really hard to implement in C#, so we didn’t even manage to work out how to do that convolution in C#. It’s trivial in some high-level languages, but you need something matrix-orientated for it to really work.

    That’s most of it, really. The thoughts that people went away with: we put down our answers to questions like “What have you learnt?” and “What surprised you?”, “How are you going to do things differently?”, and most people said redoing the problem is really, really good for understanding it properly. People hate having a massive legacy codebase that they can’t change, so being able to attack something three different ways in an environment where the end-product isn’t important: that’s something people really enjoyed. Pair-programming: people also said that they wanted to do more of that, especially with TDD ping-pong, where you write the test and somebody else writes the code. Various people thought different things about immutables, but most people thought they were good, as they promote functional programming. And TDD people found really hard. “Tell, don’t ask” people found really, really hard, and really, really, really hard to do well. And the recursion just made things trickier to debug. But most people agreed that coderetreats are really cool, and we should do more of them.

    Read the article

  • Only show Windows 7 Preview Pane (in Explorer) when the file has a preview

    - by Jonathan
    I use the preview pane often, especially with PDFs. But when selecting folders or files which don't have previews, or when not selecting anything at all, the preview pane stays. It's quite big: when I have the Explorer window maximised on my 1920x1080 monitor it takes up about half my screen, and when I use Explorer in a smaller window the preview pane stays half the size of the window and shrinks the center folder pane. Is there any way to only show the preview pane when the file has a preview, and hide it again when the file doesn't or no file is selected? (By the way, please don't suggest alternate file browsers, as they all look ugly and complicated.)

    Read the article

  • error while installing libmemcached

    - by Ahmet vardar
    I get this while installing libmemcached root@server [/libmemcached]# make make all-am make[1]: Entering directory `/libmemcached' if /bin/sh ./libtool --tag=CXX --mode=compile g++ -DHAVE_CONFIG_H -I. -I. -I. -I. -I. -ggdb -DBUILDING_HASHKIT -MT libhashkit/libhashkit_libhashkit_la-aes.lo -MD -MP -MF "libhashkit/.deps/libhashkit_libhashkit_la-aes.Tpo" -c -o libhashkit/libhashkit_libhashkit_la-aes.lo `test -f 'libhashkit/aes.cc' || echo './'`libhashkit/aes.cc; \ then mv -f "libhashkit/.deps/libhashkit_libhashkit_la-aes.Tpo" "libhashkit/.deps/libhashkit_libhashkit_la-aes.Plo"; else rm -f "libhashkit/.deps/libhashkit_libhashkit_la-aes.Tpo"; exit 1; fi ./libtool: line 866: X--tag=CXX: command not found ./libtool: line 899: libtool: ignoring unknown tag : command not found ./libtool: line 866: X--mode=compile: command not found ./libtool: line 1032: *** Warning: inferring the mode of operation is deprecated.: command not found ./libtool: line 1033: *** Future versions of Libtool will require --mode=MODE be specified.: command not found ./libtool: line 1176: Xg++: command not found ./libtool: line 1176: X-DHAVE_CONFIG_H: command not found ./libtool: line 1176: X-I.: command not found ./libtool: line 1176: X-I.: command not found ./libtool: line 1176: X-I.: command not found ./libtool: line 1176: X-I.: command not found ./libtool: line 1176: X-I.: command not found ./libtool: line 1176: X-ggdb: command not found ./libtool: line 1176: X-DBUILDING_HASHKIT: command not found ./libtool: line 1176: X-MT: command not found ./libtool: line 1176: Xlibhashkit/libhashkit_libhashkit_la-aes.lo: No such file or directory ./libtool: line 1176: X-MD: command not found ./libtool: line 1176: X-MP: command not found ./libtool: line 1176: X-MF: command not found ./libtool: line 1176: Xlibhashkit/.deps/libhashkit_libhashkit_la-aes.Tpo: No such file or directory ./libtool: line 1176: X-c: command not found ./libtool: line 1228: Xlibhashkit/libhashkit_libhashkit_la-aes.lo: No such file or directory ./libtool: line 1233: libtool: compile: cannot determine name of library object from `': command not found make[1]: *** [libhashkit/libhashkit_libhashkit_la-aes.lo] Error 1 make[1]: Leaving directory `/libmemcached' make: *** [all] Error 2 OUTPUT OF ./configure checking build system type... x86_64-unknown-linux-gnu checking host system type... x86_64-unknown-linux-gnu checking target system type... x86_64-unknown-linux-gnu checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for gawk... gawk checking whether make sets $(MAKE)... yes checking for style of include used by make... GNU checking for gcc... gcc checking whether the C compiler works... yes checking for C compiler default output file name... a.out checking for suffix of executables... checking whether we are cross compiling... no checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ISO C89... none needed checking dependency style of gcc... gcc3 checking dependency style of gcc... (cached) gcc3 checking how to run the C preprocessor... gcc -E checking for grep that handles long lines and -e... /bin/grep checking for egrep... /bin/grep -E checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... 
yes checking for stdint.h... yes checking for unistd.h... yes checking minix/config.h usability... no checking minix/config.h presence... no checking for minix/config.h... no checking whether it is safe to define __EXTENSIONS__... yes checking for isainfo... no checking for g++... g++ checking whether we are using the GNU C++ compiler... yes checking whether g++ accepts -g... yes checking dependency style of g++... gcc3 checking dependency style of g++... (cached) gcc3 checking whether gcc and cc understand -c and -o together... yes checking how to create a ustar tar archive... gnutar checking whether __SUNPRO_C is declared... no checking whether __ICC is declared... no checking "C Compiler version--yes"... "gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-52)" checking "C++ Compiler version"... "g++ (GCC) 4.1.2 20080704 (Red Hat 4.1.2-52)" checking whether time.h and sys/time.h may both be included... yes checking whether struct tm is in sys/time.h or time.h... time.h checking for size_t... yes checking for special C compiler options needed for large files... no checking for _FILE_OFFSET_BITS value needed for large files... no checking for library containing clock_gettime... -lrt checking sys/socket.h usability... yes checking sys/socket.h presence... yes checking for sys/socket.h... yes checking size of off_t... 8 checking size of size_t... 8 checking size of long long... 8 checking if time_t is unsigned... no checking for setsockopt... yes checking for bind... yes checking whether the compiler provides atomic builtins... yes checking assert.h usability... yes checking assert.h presence... yes checking for assert.h... yes checking whether to enable assertions... yes checking whether it is safe to use -fdiagnostics-show-option... yes checking whether it is safe to use -floop-parallelize-all... no checking whether it is safe to use -Wextra... yes checking whether it is safe to use -Wformat... yes checking whether it is safe to use -Wconversion... no checking whether it is safe to use -Wmissing-declarations from C++... no checking whether it is safe to use -Wframe-larger-than... no checking whether it is safe to use -Wlogical-op... no checking whether it is safe to use -Wredundant-decls from C++... yes checking whether it is safe to use -Wattributes from C++... no checking whether it is safe to use -Wno-attributes... no checking for perl... perl checking for dpkg-gensymbols... no checking for lcov... no checking for genhtml... no checking for sphinx-build... no checking for working -pipe... yes checking for bison... bison checking for flex... flex checking how to print strings... printf checking for a sed that does not truncate output... /bin/sed checking for fgrep... /bin/grep -F checking for ld used by gcc... /usr/bin/ld checking if the linker (/usr/bin/ld) is GNU ld... yes checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B checking the name lister (/usr/bin/nm -B) interface... BSD nm checking whether ln -s works... yes checking the maximum length of command line arguments... 98304 checking whether the shell understands some XSI constructs... yes checking whether the shell understands "+="... yes checking how to convert x86_64-unknown-linux-gnu file names to x86_64-unknown-linux-gnu format... func_convert_file_noop checking how to convert x86_64-unknown-linux-gnu file names to toolchain format... func_convert_file_noop checking for /usr/bin/ld option to reload object files... -r checking for objdump... objdump checking how to recognize dependent libraries... 
pass_all checking for dlltool... no checking how to associate runtime and link libraries... printf %s\n checking for ar... ar checking for archiver @FILE support... @ checking for strip... strip checking for ranlib... ranlib checking command to parse /usr/bin/nm -B output from gcc object... ok checking for sysroot... no checking for mt... no checking if : is a manifest tool... no checking for dlfcn.h... yes checking for objdir... .libs checking if gcc supports -fno-rtti -fno-exceptions... no checking for gcc option to produce PIC... -fPIC -DPIC checking if gcc PIC flag -fPIC -DPIC works... yes checking if gcc static flag -static works... yes checking if gcc supports -c -o file.o... yes checking if gcc supports -c -o file.o... (cached) yes checking whether the gcc linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes checking whether -lc should be explicitly linked in... no checking dynamic linker characteristics... GNU/Linux ld.so checking how to hardcode library paths into programs... immediate checking whether stripping libraries is possible... yes checking if libtool supports shared libraries... yes checking whether to build shared libraries... yes checking whether to build static libraries... yes checking how to run the C++ preprocessor... g++ -E checking for ld used by g++... /usr/bin/ld -m elf_x86_64 checking if the linker (/usr/bin/ld -m elf_x86_64) is GNU ld... yes checking whether the g++ linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes checking for g++ option to produce PIC... -fPIC -DPIC checking if g++ PIC flag -fPIC -DPIC works... yes checking if g++ static flag -static works... yes checking if g++ supports -c -o file.o... yes checking if g++ supports -c -o file.o... (cached) yes checking whether the g++ linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes checking dynamic linker characteristics... (cached) GNU/Linux ld.so checking how to hardcode library paths into programs... immediate checking whether the -Werror option is usable... yes checking for simple visibility declarations... yes checking for ISO C++ 98 include files... checking whether memcached executable path has been provided... no checking for memcached... /usr/local/bin/memcached checking whether memcached_sasl executable path has been provided... no checking for memcached_sasl... no checking whether gearmand executable path has been provided... no checking for gearmand... no checking libgearman/gearmand.h usability... no checking libgearman/gearmand.h presence... no checking for libgearman/gearmand.h... no checking for library containing getopt_long... none required checking for library containing gethostbyname... none required checking for the pthreads library -lpthreads... no checking whether pthreads work without any flags... yes checking for joinable pthread attribute... PTHREAD_CREATE_JOINABLE checking if more special flags are required for pthreads... no checking for PTHREAD_PRIO_INHERIT... yes checking the location of cstdint... configure: WARNING: Could not find a cstdint header. <stdint.h> checking the location of cinttypes... configure: WARNING: Could not find a cinttypes header. <inttypes.h> checking whether byte ordering is bigendian... no checking for htonll... no checking for working SO_SNDTIMEO... yes checking for working SO_RCVTIMEO... yes checking for supported struct padding... yes checking for alarm... yes checking for dup2... yes checking for getline... yes checking for gettimeofday... yes checking for memchr... yes checking for memmove... 
yes checking for memset... yes checking for pipe2... no checking for select... yes checking for setenv... yes checking for socket... yes checking for sqrt... yes checking for strcasecmp... yes checking for strchr... yes checking for strdup... yes checking for strerror... yes checking for strtol... yes checking for strtoul... yes checking for strtoull... yes checking arpa/inet.h usability... yes checking arpa/inet.h presence... yes checking for arpa/inet.h... yes checking fcntl.h usability... yes checking fcntl.h presence... yes checking for fcntl.h... yes checking libintl.h usability... yes checking libintl.h presence... yes checking for libintl.h... yes checking limits.h usability... yes checking limits.h presence... yes checking for limits.h... yes checking malloc.h usability... yes checking malloc.h presence... yes checking for malloc.h... yes checking netdb.h usability... yes checking netdb.h presence... yes checking for netdb.h... yes checking netinet/in.h usability... yes checking netinet/in.h presence... yes checking for netinet/in.h... yes checking stddef.h usability... yes checking stddef.h presence... yes checking for stddef.h... yes checking sys/time.h usability... yes checking sys/time.h presence... yes checking for sys/time.h... yes checking execinfo.h usability... yes checking execinfo.h presence... yes checking for execinfo.h... yes checking cxxabi.h usability... yes checking cxxabi.h presence... yes checking for cxxabi.h... yes checking sys/sysctl.h usability... yes checking sys/sysctl.h presence... yes checking for sys/sysctl.h... yes checking umem.h usability... no checking umem.h presence... no checking for umem.h... no checking for C++ compiler vendor... gnu checking for working alloca.h... yes checking for alloca... yes checking for error_at_line... yes checking for pid_t... yes checking vfork.h usability... no checking vfork.h presence... no checking for vfork.h... no checking for fork... yes checking for vfork... yes checking for working fork... yes checking for working vfork... (cached) yes checking for stdlib.h... (cached) yes checking for GNU libc compatible malloc... yes checking for stdlib.h... (cached) yes checking for GNU libc compatible realloc... yes checking whether strerror_r is declared... yes checking for strerror_r... yes checking whether strerror_r returns char *... yes checking for stdbool.h that conforms to C99... yes checking for _Bool... no checking for int16_t... yes checking for int32_t... yes checking for int64_t... yes checking for int8_t... yes checking for off_t... yes checking for pid_t... (cached) yes checking for ssize_t... yes checking for uint16_t... yes checking for uint32_t... yes checking for uint64_t... yes checking for uint8_t... yes checking whether byte ordering is bigendian... (cached) no checking for an ANSI C-conforming const... yes checking for inline... inline checking for working volatile... yes checking for C/C++ restrict keyword... __restrict checking whether the compiler supports GCC C++ ABI name demangling... yes checking sasl/sasl.h usability... no checking sasl/sasl.h presence... no checking for sasl/sasl.h... no checking uuid/uuid.h usability... yes checking uuid/uuid.h presence... yes checking for uuid/uuid.h... yes checking for main in -luuid... yes checking for clock_gettime in -lrt... yes checking for floor in -lm... yes checking for sigignore... yes checking atomic.h usability... no checking atomic.h presence... no checking for atomic.h... no checking for setppriv... no checking for winsock2.h... 
    no checking for poll.h... yes checking for sys/wait.h... yes checking for fnmatch.h... yes checking for MSG_NOSIGNAL... yes checking for MSG_DONTWAIT... yes checking for MSG_MORE... yes checking event.h usability... yes checking event.h presence... yes checking for event.h... yes checking for main in -levent... yes checking for endianness... little configure: creating ./config.status config.status: creating Makefile config.status: creating docs/conf.py config.status: creating libhashkit-1.0/configure.h config.status: creating libmemcached-1.0/configure.h config.status: creating libmemcached-1.2/configure.h config.status: creating libmemcached-2.0/configure.h config.status: creating support/libmemcached.pc config.status: creating support/libmemcached.spec config.status: creating support/libmemcached-fc.spec config.status: creating libtest/version.h config.status: creating config.h config.status: config.h is unchanged config.status: executing depfiles commands config.status: executing libtool commands

        --- Configuration summary for libmemcached version 1.0.6
        * Installation prefix:  /usr/local
        * System type:          unknown-linux-gnu
        * Host CPU:             x86_64
        * C Compiler:           gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-52)
        * Assertions enabled:   yes
        * Debug enabled:        no
        * Warnings as failure:  no
        * SASL support:
        ---

    Does anyone know how to solve this?
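    Not suggested in the thread itself, but this particular failure pattern ("X--tag=CXX: command not found", "libtool: ignoring unknown tag") usually means the libtool script shipped in the tarball was generated by a different autotools version than the one on the box. A hedged sketch of the usual way out, assuming an RHEL-style system to match the gcc 4.1.2 / Red Hat toolchain above:

        $ sudo yum install autoconf automake libtool   # make sure the local autotools are present
        $ autoreconf -fvi                              # regenerate configure, Makefiles and libtool
        $ ./configure && make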

    Read the article

  • 2.5D game development

    - by ne5tebiu
    2.5D ("two-and-a-half-dimensional"), 3/4 perspective and pseudo-3D are terms used to describe either: graphical projections and techniques which cause a series of images or scenes to fake or appear to be three-dimensional (3D) when in fact they are not, or gameplay in an otherwise three-dimensional video game that is restricted to a two-dimensional plane. (Information taken from Wikipedia.org) I have a question based on 2.5D game development. As stated before, 2.5D uses graphical projections and techniques to make fake 3d or a gameplay restricted to a two-dimensional plane. A good example is a TQ Digital made game: Zero Online (screenshot) the whole map is made of 2d images and only NPCs and players are 3d. The maps were drawn manually by hand without any 3d software rendering. As I'm playing the game I feel like I'm going from a lower part of the map (ground) to a higher one (some metal platform) and it feels like I'm moving in 3 dimensions. But when I look closely, I see that the player size didn't change and the shadow too but I'm still feeling like I'm somehow higher then before (I had rendered a simple map myself that I made in 3dmax but it didn't quite give the result I wanted). How to accomplish such an effect?

    Read the article

  • How to move linux executables to a ramdisk?

    - by alfa64
I've made a ramdisk this way:

mkdir -p /media/ramdisk
mount -t tmpfs -o size=512M tmpfs /media/ramdisk/

The reason is that I run a lot of node.js scripts whose execution time is very small, and I suspect the overhead comes from reloading the node.js executable from disk, then discarding it, on each subsequent run. So I think this might be a way to gain a bit of performance, if not much. How can I move a program like node to the ramdisk and run it from there? The idea is to have a startup script that creates the ramdisk and puts the node files inside it. Note that I'm currently using Fedora 16, for what it's worth. Thanks in advance.
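Since the question asks for a startup script, here is a minimal sketch of one, with the caveat that /usr/bin/node's location, the 512M size, and myscript.js are assumptions carried over from the question or invented for illustration:

    #!/bin/bash
    # Sketch of a boot-time script: build the ramdisk, copy node into it,
    # and make the copy the one that gets executed.
    RAMDISK=/media/ramdisk

    mkdir -p "$RAMDISK"
    # Mount only if not already mounted (idempotent across re-runs).
    mountpoint -q "$RAMDISK" || mount -t tmpfs -o size=512M tmpfs "$RAMDISK"

    # Copy the node binary into the ramdisk ("which node" finds the original).
    cp "$(which node)" "$RAMDISK/node"

    # Put the ramdisk first in PATH so "node" resolves to the tmpfs copy.
    export PATH="$RAMDISK:$PATH"
    node myscript.js

One design note: Linux already keeps recently executed binaries in the page cache, so a frequently run node binary is usually served from RAM anyway; the tmpfs copy mainly helps if memory pressure keeps evicting it.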

    Read the article

  • Resize Image thru Slider in Silverlight

    - by Sayre Collado
Hello guys, I've been playing with the slider in Silverlight. The result is a simple image resize driven by sliders. The first image below shows the default size of my sample, and the second shows the result after sliding to the right and to the top. The XAML layout is very simple:

<Slider Minimum="80" Maximum="238" Height="23" HorizontalAlignment="Center" Name="sldBottom" Width="246" Margin="27,226,27,1" />
<Slider Height="212" Minimum="80" Maximum="209" Name="sldRight" Width="28" Orientation="Vertical" Margin="271,9,1,29" />
<Image HorizontalAlignment="Center" Name="image1" Stretch="Fill" VerticalAlignment="Center" Source="/GBLOgs2;component/Images/logosai.JPG" Height="{Binding ElementName=sldRight,Path=Value}" Width="{Binding ElementName=sldBottom,Path=Value}" />

The Height of image1 tracks the value of sldRight (up to that slider's Maximum), and likewise its Width tracks sldBottom. The pattern is Height/Width="{Binding ElementName=NAME_OF_THE_SLIDER, Path=Value}". When you move a slider, the image resizes. And that's all. Happy programming.

    Read the article

  • Move Firefox’s Tab Bar to the Top

    - by Asian Angel
Would you prefer to have Firefox's Tab Bar located at the top of the browser instead of in its default location? See how easy it is to move the Tab Bar back and forth between the top and default positions, "flip switch style", with the Tabs On Top extension. Note: the Tabs On Top extension supports the multi-row feature in TabMixPlus. Before: you can see the Tab Bar in its default location here in our test browser... not bad, but what if you would prefer it at the top of the browser? After: as soon as you have installed the extension and restarted Firefox, the Tab Bar will have automatically moved to the top of the browser. You will most likely notice a slight decrease in tab height as well (which occurred during our tests). To move the Tab Bar back and forth between the top and default locations, just select/deselect "Tab Bar on top" in the Toolbars context menu. You can reduce the size of the upper UI by hiding some of the other toolbars, and go even further with extensions that hide the Title Bar. This is definitely a good UI-matching extension for anyone using a Chrome-based theme in Firefox. Conclusion: if you are unhappy with the default location of Firefox's Tab Bar, this extension certainly provides an alternative. Links: Download the Tabs On Top extension (Mozilla Add-ons)

    Read the article

  • Update RDS db via mysqlbinlog: "you need (at least one of) the SUPER privilege(s)"

    - by timoxley
We are moving a production site to EC2/RDS. I followed these instructions: http://geehwan.posterous.com/moving-a-production-mysql-database-to-amazon I have set up row-based binary logging on the production server and did a:

mysqldump --single-transaction --master-data=2 -C -q -u root -p backup.sql

then imported it into the RDS instance. No dramas. Due to the size of the db and the minimal-downtime requirement, I've got to bring the EC2 db up to the latest data via the binlogs, and it won't let me:

mysqlbinlog mysql-bin.000004 --start-position=360812488 | mysql -uroot -p -h

and it says:

ERROR 1227 (42000) at line 6: Access denied; you need (at least one of) the SUPER privilege(s) for this operation

My guess, based on what is on line 6 of the binlog, is that it's the 'write to the BINLOG' statements in the SQL backup, and because RDS doesn't support this it can't run those statements, or something; I don't really know. Please help.
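A hedged sketch of one way to attack this, on the assumption that the failing statement at line 6 is one of the BINLOG '...' base64 events that row-based logging produces (applying those requires SUPER, which the RDS master user does not have). First inspect the binlog to confirm, then, if feasible, switch the source to statement-based logging for the remaining catch-up window so the binlog replays as plain SQL:

    # Decode the row events into readable pseudo-SQL comments to see what is
    # actually at/after the failing position:
    mysqlbinlog --verbose --base64-output=DECODE-ROWS \
        --start-position=360812488 mysql-bin.000004 | head -40

    # On the production server (assumes you may change the global setting;
    # only sessions started afterwards pick it up):
    mysql -u root -p -e "SET GLOBAL binlog_format = 'STATEMENT';"

Whether statement-based logging is safe depends on the workload (non-deterministic statements replicate differently), so treat this as a direction to verify rather than a confirmed fix.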

    Read the article

  • Imperative vs. LINQ Performance on WP7

    - by Bil Simser
Jesse Liberty had a nice post presenting the concepts around imperative, LINQ and fluent programming to populate a listbox. Check out the post as it's a great example of some foundational things every .NET programmer should know. I was more interested in what IL code would be generated for the imperative vs. LINQ versions, what the performance numbers are, and how they differ. The code at the instruction level is interesting but not surprising. The imperative example, with its list creation and loops, weighs in at about 60 instructions:

.method private hidebysig instance void ImperativeMethod() cil managed
{
  .maxstack 3
  .locals init (
    [0] class [mscorlib]System.Collections.Generic.IEnumerable`1<int32> someData,
    [1] class [mscorlib]System.Collections.Generic.List`1<int32> inLoop,
    [2] int32 n,
    [3] class [mscorlib]System.Collections.Generic.IEnumerator`1<int32> CS$5$0000,
    [4] bool CS$4$0001)
  L_0000: nop
  L_0001: ldc.i4.1
  L_0002: ldc.i4.s 50
  L_0004: call class [mscorlib]System.Collections.Generic.IEnumerable`1<int32> [System.Core]System.Linq.Enumerable::Range(int32, int32)
  L_0009: stloc.0
  L_000a: newobj instance void [mscorlib]System.Collections.Generic.List`1<int32>::.ctor()
  L_000f: stloc.1
  L_0010: nop
  L_0011: ldloc.0
  L_0012: callvirt instance class [mscorlib]System.Collections.Generic.IEnumerator`1<!0> [mscorlib]System.Collections.Generic.IEnumerable`1<int32>::GetEnumerator()
  L_0017: stloc.3
  L_0018: br.s L_003a
  L_001a: ldloc.3
  L_001b: callvirt instance !0 [mscorlib]System.Collections.Generic.IEnumerator`1<int32>::get_Current()
  L_0020: stloc.2
  L_0021: nop
  L_0022: ldloc.2
  L_0023: ldc.i4.5
  L_0024: cgt
  L_0026: ldc.i4.0
  L_0027: ceq
  L_0029: stloc.s CS$4$0001
  L_002b: ldloc.s CS$4$0001
  L_002d: brtrue.s L_0039
  L_002f: ldloc.1
  L_0030: ldloc.2
  L_0031: ldloc.2
  L_0032: mul
  L_0033: callvirt instance void [mscorlib]System.Collections.Generic.List`1<int32>::Add(!0)
  L_0038: nop
  L_0039: nop
  L_003a: ldloc.3
  L_003b: callvirt instance bool [mscorlib]System.Collections.IEnumerator::MoveNext()
  L_0040: stloc.s CS$4$0001
  L_0042: ldloc.s CS$4$0001
  L_0044: brtrue.s L_001a
  L_0046: leave.s L_005a
  L_0048: ldloc.3
  L_0049: ldnull
  L_004a: ceq
  L_004c: stloc.s CS$4$0001
  L_004e: ldloc.s CS$4$0001
  L_0050: brtrue.s L_0059
  L_0052: ldloc.3
  L_0053: callvirt instance void [mscorlib]System.IDisposable::Dispose()
  L_0058: nop
  L_0059: endfinally
  L_005a: nop
  L_005b: ldarg.0
  L_005c: ldfld class [System.Windows]System.Windows.Controls.ListBox PerfTest.MainPage::LB1
  L_0061: ldloc.1
  L_0062: callvirt instance void [System.Windows]System.Windows.Controls.ItemsControl::set_ItemsSource(class [mscorlib]System.Collections.IEnumerable)
  L_0067: nop
  L_0068: ret
  .try L_0018 to L_0048 finally handler L_0048 to L_005a
}

Compare that to the IL generated for the LINQ version, which has about half the instructions and just gets the job done, no fluff:

.method private hidebysig instance void LINQMethod() cil managed
{
  .maxstack 4
  .locals init (
    [0] class [mscorlib]System.Collections.Generic.IEnumerable`1<int32> someData,
    [1] class [mscorlib]System.Collections.Generic.IEnumerable`1<int32> queryResult)
  L_0000: nop
  L_0001: ldc.i4.1
  L_0002: ldc.i4.s 50
  L_0004: call class [mscorlib]System.Collections.Generic.IEnumerable`1<int32> [System.Core]System.Linq.Enumerable::Range(int32, int32)
  L_0009: stloc.0
  L_000a: ldloc.0
  L_000b: ldsfld class [System.Core]System.Func`2<int32, bool> PerfTest.MainPage::CS$<>9__CachedAnonymousMethodDelegate6
  L_0010: brtrue.s L_0025
  L_0012: ldnull
  L_0013: ldftn bool PerfTest.MainPage::<LINQProgramming>b__4(int32)
  L_0019: newobj instance void [System.Core]System.Func`2<int32, bool>::.ctor(object, native int)
  L_001e: stsfld class [System.Core]System.Func`2<int32, bool> PerfTest.MainPage::CS$<>9__CachedAnonymousMethodDelegate6
  L_0023: br.s L_0025
  L_0025: ldsfld class [System.Core]System.Func`2<int32, bool> PerfTest.MainPage::CS$<>9__CachedAnonymousMethodDelegate6
  L_002a: call class [mscorlib]System.Collections.Generic.IEnumerable`1<!!0> [System.Core]System.Linq.Enumerable::Where<int32>(class [mscorlib]System.Collections.Generic.IEnumerable`1<!!0>, class [System.Core]System.Func`2<!!0, bool>)
  L_002f: ldsfld class [System.Core]System.Func`2<int32, int32> PerfTest.MainPage::CS$<>9__CachedAnonymousMethodDelegate7
  L_0034: brtrue.s L_0049
  L_0036: ldnull
  L_0037: ldftn int32 PerfTest.MainPage::<LINQProgramming>b__5(int32)
  L_003d: newobj instance void [System.Core]System.Func`2<int32, int32>::.ctor(object, native int)
  L_0042: stsfld class [System.Core]System.Func`2<int32, int32> PerfTest.MainPage::CS$<>9__CachedAnonymousMethodDelegate7
  L_0047: br.s L_0049
  L_0049: ldsfld class [System.Core]System.Func`2<int32, int32> PerfTest.MainPage::CS$<>9__CachedAnonymousMethodDelegate7
  L_004e: call class [mscorlib]System.Collections.Generic.IEnumerable`1<!!1> [System.Core]System.Linq.Enumerable::Select<int32, int32>(class [mscorlib]System.Collections.Generic.IEnumerable`1<!!0>, class [System.Core]System.Func`2<!!0, !!1>)
  L_0053: stloc.1
  L_0054: ldarg.0
  L_0055: ldfld class [System.Windows]System.Windows.Controls.ListBox PerfTest.MainPage::LB2
  L_005a: ldloc.1
  L_005b: callvirt instance void [System.Windows]System.Windows.Controls.ItemsControl::set_ItemsSource(class [mscorlib]System.Collections.IEnumerable)
  L_0060: nop
  L_0061: ret
}

Again, not surprising, but a good indicator that you should consider using LINQ where possible.
In fact, if you have ReSharper installed you'll see a squiggly (technical term) in the imperative code that says "Hey Dude, I can convert this to LINQ if you want to be c00L!" (or something like that; it's the 2010 geek version of Clippy). What about the fluent version? As Jon correctly pointed out in the comments, when you compare the IL for the LINQ code and the IL for the fluent code, it's the same. LINQ and the fluent interface are just syntactic sugar, so use whichever you're most comfortable with; at the end of the day they're the same. Now onto the numbers. I expected the imperative version to outperform the LINQ version (before I saw the generated IL). Call it womanly instinct. A gut feel. Whatever. Some of the numbers are interesting, though. For Jesse's example of 50 items, the imperative sample clocked in at 7ms while the LINQ version completed in 4ms. As the number of items went up, the elapsed time didn't climb exponentially. At 500 items they were pretty much the same, and the results stayed similar up to about 50,000 items. At 500,000 items the gap widened, but not by much (2.2 seconds for imperative, 2.3 for LINQ). It wasn't until 5,000,000 items that things became noticeable: imperative filled the list in 20 seconds while LINQ took 8 seconds longer (although personally I wouldn't suggest you put 5 million items in a list unless you want your users showing up at your door with torches and pitchforks). Here's the table with the full results.

Method/Items      50     500    5,000   50,000   500,000   5,000,000
Imperative       7ms     7ms     38ms    223ms    2230ms     20974ms
LINQ/Fluent      4ms     6ms     41ms    240ms    2310ms     28731ms

Like I said, at the end of the day it's not a huge difference, and you really don't want your users waiting around for 30 seconds on a mobile device while a list fills. In fact, if Windows Phone 7 detects that any one operation takes more than 10 seconds, it considers the app hung and shuts it down. The results here are for Windows Phone 7, but frankly they're much the same for desktop and web apps, so feel free to apply them generally. From a programming perspective, choose what you like. Some LINQ statements can get pretty hairy, so I usually fall back on my simple mind and write it imperatively. If you really want to impress your friends, write it old school and then let ReSharper do the hard work! Happy programming!

    Read the article

  • Linux partitioning problem

    - by Claudiu
I am using cfdisk to repartition my HDD, as from the OS install I only got one big partition plus swap. I wanted to resize the big partition, make a 1 GB /boot, and use the rest of the space for an extended partition. After running cfdisk, I recheck the partitions with fdisk -l and get this:

Disk /dev/sda: 320 GB, 320070320640 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot    Start      End      Blocks   Id  System
/dev/sda3             1    38455   308881755    f  Extended LBA
Warning: Partition 3 does not end on cylinder boundary.
/dev/sda2         38455    38698     1951897   82  Linux swap
/dev/sda1  *      38699    38913   311349654   83  Linux

My problem is the warning message. I think I know the cause: the Blocks size of sda1. How could that be so big if the Start/End interval is so small?
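As a sanity check (my own arithmetic, not from the original post): with the reported unit of 16065 * 512 = 8225280 bytes per cylinder, the 215-cylinder span of sda1 cannot hold anywhere near the printed block count:

    # Expected 1K-block count for a partition spanning cylinders 38699..38913:
    echo $(( (38913 - 38699 + 1) * 8225280 / 1024 ))   # 1727280, about 1.7 GB

    # What the printed 311349654 blocks actually amount to:
    echo $(( 311349654 / 1024 / 1024 ))                # ~296 GiB, nearly the whole disk

So the Blocks column for sda1 describes roughly the entire disk, which is inconsistent with its Start/End columns; that mismatch suggests the partition table entries overlap or were written inconsistently, which fits the cylinder-boundary warning.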

    Read the article
