Search Results

Search found 41882 results on 1676 pages for 'png files'.


  • C++ Database vs Reading Files

    - by Ohmages
    I've been programming a C++ game/server for the past year. I have been using MySQL for character logins, items, monsters, etc. (I'm on Windows). My question is: what databases do some big-time developers use, e.g. for Battle.net, Diablo II, Diablo III, Mythos, Hellgate, etc.? Did they build their own database, or do they use an existing framework for logins and character transfers? I do know that in Diablo II they use character files to transfer characters into the game world, but what about the login into Battle.net? Would it be wiser for me to stick with MySQL, is there something out there faster and more stable, or should I create a login system of my own that looks through a file to see if you provided the correct password? Can't wait to get some replies. Thanks! PS. Currently the framework is much like Battle.net, where you log into a lobby, then create and join games; the game server and lobby server are separate servers, too. So I'm mostly wondering about the lobby server for logins, because I'm expecting several hundred thousand connections/logins.

    Read the article

  • Javascript object dependencies

    - by Anurag
    In complex client-side projects, the number of JavaScript files can get very large. For performance reasons, though, it's good to concatenate these files and compress the result for sending over the wire. I am having problems concatenating them, as in some cases the dependencies are included after they are needed. For instance, there are two files:

        /modules/Module.js   <requires Core.js>
        /modules/core/Core.js

    The directories are recursively traversed, and Module.js gets included before Core.js, which causes errors. This is just a simple example; dependencies can span directories, and there could be other complex cases. There are no circular dependencies, though. The JavaScript structure I follow is similar to Java packages, where each file defines a single object (I'm using MooTools, but that's irrelevant). The structure of each JavaScript file and its dependencies is always consistent:

        Module.js:
            var Module = new Class({
                Implements: Core,
                ...
            });

        Core.js:
            var Core = new Class({
                ...
            });

    What practices do you usually follow to handle dependencies in projects where the number of JavaScript files is huge and there are inter-file dependencies?
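
    Since the <requires ...> annotations describe a full dependency graph, a build step can order the files with a topological sort instead of relying on directory traversal order. Below is a minimal sketch of such a concatenation script in Python, assuming each file declares its dependencies in a comment like // <requires Core.js> (the annotation syntax, the modules/ root, and the all.js output name are illustrative, not part of the original project):

        import os, re

        REQUIRES = re.compile(r'<requires\s+(\S+?)>')

        def load_sources(root):
            """Map file name -> (path, names of required files)."""
            sources = {}
            for dirpath, _, names in os.walk(root):
                for name in names:
                    if name.endswith('.js'):
                        path = os.path.join(dirpath, name)
                        with open(path) as f:
                            sources[name] = (path, REQUIRES.findall(f.read()))
            return sources

        def ordered(sources):
            """Emit each file after its dependencies (no cycles assumed)."""
            done, result = set(), []
            def visit(name):
                if name in done:
                    return
                done.add(name)
                for dep in sources[name][1]:
                    if dep in sources:   # ignore requires outside the scanned tree
                        visit(dep)
                result.append(sources[name][0])
            for name in sorted(sources):
                visit(name)
            return result

        with open('all.js', 'w') as out:
            for path in ordered(load_sources('modules')):
                with open(path) as f:
                    out.write(f.read() + '\n')

    With the example above, Core.js is visited (and therefore emitted) before Module.js regardless of where the two files sit in the directory tree.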

    Read the article

  • System.InvalidOperationException: Failed to map the path '/sharedDrive/Public'

    - by d03boy
    I'm trying to set up a page that will allow users to download files from a shared drive (where the file is actually sent via the page). So far this is what I have:

        public partial class TestPage : System.Web.UI.Page
        {
            protected DirectoryInfo dir;
            protected FileInfo[] files;

            protected void Page_Load(object sender, EventArgs e)
            {
                dir = new DirectoryInfo(Server.MapPath(@"\\sharedDrive\Public"));
                files = dir.GetFiles();
            }
        }

    The aspx page looks kind of like this:

        <% Response.Write(System.Security.Principal.WindowsIdentity.GetCurrent().Name); %>
        <ul>
        <% foreach (System.IO.FileInfo f in files) {
               Response.Write("<li>" + f.FullName + "</li>");
           } %>
        </ul>

    When I remove the erroneous parts of the code, the page tells me that the Windows identity I'm using is my user (which has access to the drive). I don't understand what the problem could be or what it's even complaining about.

    Read the article

  • How do I check to see if my subview is being touched?

    - by Amy
    I went through this tutorial about how to animate sprites: http://icodeblog.com/2009/07/24/iphone-programming-tutorial-animating-a-game-sprite/ I've been attempting to expand on the tutorial by trying to make Ryu animate only when he is touched. However, the touch is not even being registered, and I believe it has something to do with it being a subview. Here is my code:

        -(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
            UITouch *touch = [touches anyObject];
            if ([touch view] == ryuView) {
                NSLog(@"Touch");
            } else {
                NSLog(@"No touch");
            }
        }

        -(void)ryuAnims {
            NSArray *imageArray = [[NSArray alloc] initWithObjects:
                [UIImage imageNamed:@"1.png"],
                [UIImage imageNamed:@"2.png"],
                [UIImage imageNamed:@"3.png"],
                [UIImage imageNamed:@"4.png"],
                [UIImage imageNamed:@"5.png"],
                [UIImage imageNamed:@"6.png"],
                [UIImage imageNamed:@"7.png"],
                [UIImage imageNamed:@"8.png"],
                [UIImage imageNamed:@"9.png"],
                [UIImage imageNamed:@"10.png"],
                [UIImage imageNamed:@"11.png"],
                [UIImage imageNamed:@"12.png"],
                nil];
            ryuView.animationImages = imageArray;
            ryuView.animationDuration = 1.1;
            [ryuView startAnimating];
        }

        -(void)viewDidLoad {
            [super viewDidLoad];
            UIImageView *image = [[UIImageView alloc] initWithFrame:CGRectMake(100, 125, 150, 130)];
            ryuView = image;
            ryuView.image = [UIImage imageNamed:@"1.png"];
            ryuView.contentMode = UIViewContentModeBottomLeft;
            [self.view addSubview:ryuView];
            [image release];
        }

    This code compiles fine; however, when touching or clicking on Ryu, nothing happens. I've also tried if ([touch view] == ryuView.image), but that gives me this error: "Comparison of distinct Objective-C type 'struct UIImage *' and 'struct UIView *' lacks a cast." What am I doing wrong?

    Read the article

  • Configure IIS7 to serve static content through the ASP.NET runtime

    - by Anton Gogolev
    I searched high and low and still cannot find a definite answer. How do I configure IIS 7.0, or a web application in IIS, so that the ASP.NET runtime will handle all requests, including ones to static files like *.js, *.gif, etc.? What I'm trying to do is as follows. We have a kind of SaaS-y site, which we can "skin" for every customer. "Skinning" means developing a custom master page and using a bunch of *.css and other images. Quite naturally, I'm using a VirtualPathProvider, which operates like this:

        public override System.Web.Hosting.VirtualFile GetFile(string virtualPath)
        {
            if (PhysicalFileExists(virtualPath))
            {
                var virtualFile = base.GetFile(virtualPath);
                return virtualFile;
            }
            if (VirtualFileExists(virtualPath))
            {
                var brandedVirtualPath = GetBrandedVirtualPath(virtualPath);
                var absolutePath = HttpContext.Current.Server.MapPath(brandedVirtualPath);
                Trace.WriteLine(string.Format("Serving '{0}' from '{1}'", brandedVirtualPath, absolutePath),
                                "BrandingAwareVirtualPathProvider");
                var virtualFile = new VirtualFile(brandedVirtualPath, absolutePath);
                return virtualFile;
            }
            return null;
        }

    The basic idea is as follows: we have a branding folder inside our web app, which in turn contains folders for each "brand", with "brand" being equal to host name. That is, requests to http://foo.example.com/ should use static files from branding/foo_example_com, whereas http://bar.example.com/ should use content from branding/bar_example_com. Now what I want IIS to do is forward all requests for static files to the StaticFileHandler, which would then use this whole "infrastructure" and serve the correct files. However, try as I might, I cannot configure IIS to do this.
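
    The per-host override rule itself is simple and independent of IIS; here is a minimal language-neutral sketch of the resolution logic, mirroring the branding/foo_example_com convention described above (the folder names and fallback root are illustrative assumptions):

        import os

        BRANDING_ROOT = 'branding'   # assumed webapp-relative branding folder
        DEFAULT_ROOT = 'static'      # assumed fallback for unbranded files

        def resolve(host, relative_path):
            """Map a request host and a static-file path to a physical file,
            preferring that host's brand folder (dots become underscores)."""
            brand = host.replace('.', '_')
            branded = os.path.join(BRANDING_ROOT, brand, relative_path)
            if os.path.exists(branded):
                return branded
            return os.path.join(DEFAULT_ROOT, relative_path)

        # resolve('foo.example.com', 'css/site.css')
        # -> 'branding/foo_example_com/css/site.css' if it exists,
        #    else 'static/css/site.css'

    The hard part, as the question says, is routing static requests through managed code at all; in IIS 7 integrated mode that is usually a matter of handler/module configuration rather than anything in the provider code.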

    Read the article

  • Need help with a recursive Perl directory listing

    - by Catarrunas
    Hi everybody, I think this is a simple problem, but I've been stuck on it for some time now and need a fresh pair of eyes. The thing is, I have this code in Perl:

        #!c:/Perl/bin/perl

        use CGI qw/param/;
        use URI::Escape;

        print "Content-type: text/html\n\n";

        my $directory = param('directory');
        $directory = uri_unescape($directory);
        my @contents;
        readDir($directory);

        foreach (@contents) {
            print "$_\n";
        }

        #------------------------------------------------------------------------
        sub readDir() {
            my $dir = shift;
            opendir(DIR, $dir) or die $!;
            while (my $file = readdir(DIR)) {
                next if ($file =~ m/^\./);
                if (-d $dir.$file) {
                    #print $dir.$file . " ----- DIR\n";
                    readDir($dir.$file);
                }
                push @contents, ($dir . $file);
            }
            closedir(DIR);
        }

    I've tried to make it recursive. I need to have all the files of all of the directories and subdirectories, with the full path, so that I can open the files in the future. But my output only returns the files in the current directory and the files in the first directory that it finds; if I have three folders inside the directory, it only shows the first one. Example of the cmd call: perl readDir.pl directory=C:/PerlTest/ Thanks
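
    Two things in that Perl are worth noting: the bareword DIR handle is a global, so the recursive call reopens and then closes the very handle the outer loop is still reading from, and $dir.$file only forms a valid path when $dir ends in a slash. The traversal itself is easy to express language-neutrally; a minimal Python sketch of the intended behavior (collect full paths of every file under a root, skipping dot entries):

        import os

        def read_dir(root):
            """Return full paths of all files under root, skipping dot entries."""
            contents = []
            for dirpath, dirnames, filenames in os.walk(root):
                # prune hidden directories so os.walk doesn't descend into them
                dirnames[:] = [d for d in dirnames if not d.startswith('.')]
                for name in filenames:
                    if not name.startswith('.'):
                        contents.append(os.path.join(dirpath, name))
            return contents

        for path in read_dir('C:/PerlTest/'):
            print(path)

    In the Perl itself, the equivalent fixes are a lexical handle per call (opendir(my $dh, $dir)) and joining paths with an explicit separator.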

    Read the article

  • Run command with space characters in bash script

    - by ??iu
    I have a file that contains a list of files:

        02 of Clubs.eps
        02 of Diamonds.eps
        02 of Hearts.eps
        02 of Spades.eps
        ...

    I am attempting to mass-convert these to png format in several sizes. The script I am using to do this is:

        while read -r line
        do
            for i in 80 35 200
            do
                convert $(sed 's/ /\\ /g' <<< Cards/${line}) -size ${i}x${i} ../img/card/$(basename $(tr ' ' '_' <<< ${line} | tr '[A-Z]' '[a-z]') .eps)_${i}.png;
            done
        done < card_list.txt

    However, this doesn't work: the filename apparently gets split on each word, resulting in the following error output:

        convert: unable to open image `Cards/02\': No such file or directory @ error/blob.c/OpenBlob/2514.
        convert: no decode delegate for this image format `Cards/02\' @ error/constitute.c/ReadImage/532.
        convert: unable to open image `of\': No such file or directory @ error/blob.c/OpenBlob/2514.
        convert: no decode delegate for this image format `of\' @ error/constitute.c/ReadImage/532.
        convert: unable to open image `Clubs.eps': No such file or directory @ error/blob.c/OpenBlob/2514.

    If I change the convert to an echo, the result looks right, and if I copy a line and run it myself in the shell it works fine:

        convert Cards/02\ of\ Clubs.eps -size 80x80 ../img/card/02_of_clubs_80.png
        convert Cards/02\ of\ Clubs.eps -size 35x35 ../img/card/02_of_clubs_35.png
        convert Cards/02\ of\ Clubs.eps -size 200x200 ../img/card/02_of_clubs_200.png
        convert Cards/02\ of\ Diamonds.eps -size 80x80 ../img/card/02_of_diamonds_80.png
        convert Cards/02\ of\ Diamonds.eps -size 35x35 ../img/card/02_of_diamonds_35.png
        convert Cards/02\ of\ Diamonds.eps -size 200x200 ../img/card/02_of_diamonds_200.png
        convert Cards/02\ of\ Hearts.eps -size 80x80 ../img/card/02_of_hearts_80.png
        convert Cards/02\ of\ Hearts.eps -size 35x35 ../img/card/02_of_hearts_35.png
        convert Cards/02\ of\ Hearts.eps -size 200x200 ../img/card/02_of_hearts_200.png
        convert Cards/02\ of\ Spades.eps -size 80x80 ../img/card/02_of_spades_80.png

    UPDATE: Just adding quotes (see below) has the same result as the above, where I had been using sed to add backslashes:

        convert '"'Cards/${line}'"' -size ${i}x${i} ../img/card/$(basename $(tr ' ' '_' <<< ${line} | tr '[A-Z]' '[a-z]') .eps)_${i}.png;

    I've tried both double and single quotes.
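
    The underlying issue is that backslashes produced by a command substitution are never re-parsed as escapes; the unquoted $(...) result is simply word-split, so the bash fix is to quote the expansion itself: convert "Cards/${line}" -size ... One way to sidestep shell quoting entirely is to drive convert with an argument vector, as in this Python sketch (assumes ImageMagick's convert is on PATH and the same Cards/ layout as above):

        import subprocess
        from pathlib import Path

        sizes = [80, 35, 200]
        with open('card_list.txt') as f:
            for line in f:
                name = line.rstrip('\n')
                if not name:
                    continue
                # '02 of Clubs.eps' -> '02_of_clubs'
                stem = Path(name).stem.replace(' ', '_').lower()
                for i in sizes:
                    # each list element reaches convert as one argument;
                    # spaces need no escaping at all
                    subprocess.run(
                        ['convert', f'Cards/{name}', '-size', f'{i}x{i}',
                         f'../img/card/{stem}_{i}.png'],
                        check=True)

    The quoted-quotes variant in the UPDATE fails for a related reason: the literal double quotes become part of the filename instead of grouping the words.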

    Read the article

  • Makefile - Dependency generation

    - by Profetylen
    I am trying to create a makefile that automatically compiles and links my .cpp files into an executable via .o files. What I can't get working is automated (or even manual) dependency generation. When I uncomment the commented code below, nothing is recompiled when I run make build. All I get is make: Nothing to be done for 'build'., even if x.h (or any .h file) has changed. I've been trying to learn from this question: Makefile, header dependencies (dmckee's answer, especially). Why isn't this makefile working? Clarification: I can compile everything, but when I modify any header file, the .cpp files that depend on it aren't recompiled. So if I, for instance, compile my entire source, then change a #define in a header file, and then run make build, I get Nothing to be done for 'build'. (with either commented chunk below uncommented).

        CC=gcc
        CFLAGS=-O2 -Wall
        LDFLAGS=-lSDL -lstdc++
        SOURCES=$(wildcard *.cpp)
        OBJECTS=$(patsubst %.cpp, obj/%.o,$(SOURCES))
        TARGET=bin/test.bin

        # Nothing happens when I uncomment the following. (automated attempt)
        #depend: .depend
        #
        #.depend: $(SOURCES)
        #	rm -f ./.depend
        #	$(CC) $(CFLAGS) -MM $^ >> ./.depend;
        #
        #include .depend

        # And nothing happens when I uncomment the following.
        # x.cpp and x.h are files in my project. (manual attempt)
        #x.o: x.cpp x.h

        clean:
        	rm -f $(TARGET)
        	rm -f $(OBJECTS)

        run: build
        	./$(TARGET)

        debug: build
        	nm $(TARGET)
        	gdb $(TARGET)

        build: $(TARGET)

        $(TARGET): $(OBJECTS)
        	@mkdir -p $(@D)
        	$(CC) $(LDFLAGS) $(OBJECTS) -o $@

        obj/%.o: %.cpp
        	@mkdir -p $(@D)
        	$(CC) -c $(CFLAGS) $< -o $@

        include $(DEPENDENCIES)

    Read the article

  • How can I write to a single xml file from two programs at the same time?

    - by Tom Bushell
    We recently started working with XML files, after many years of experience with the old INI files. My coworker found some CodeProject sample code that uses System.Xml.XmlDocument.Save. We are getting exceptions when two programs try to write to the same file at the same time: System.IO.IOException: The process cannot access the file 'C:\Test.xml' because it is being used by another process. This seems obvious in hindsight, but we had not anticipated it because accessing INI files via the Win32 API does not have this limitation. I assume there's some arbitration done by the Win32 calls that works at a higher level than the XmlDocument.Save method. I'm hoping there are higher-level XML routines somewhere in the .NET library that work similarly to the Win32 functions, but I don't know where to start looking. Or maybe we can set up our file access permissions to allow multiple programs to write to the same file? Time is short (like on almost all SW projects), and if we can't find a solution quickly, we'll have to hold our noses and go back to INI files.
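
    For what it's worth, the Win32 profile functions perform a short open-write-close cycle on each call, so two writers rarely collide, while XmlDocument.Save holds the file for the whole document write and gets no arbitration at all: the writers have to coordinate themselves. A common pattern is a retry loop around an exclusive lock, sketched here language-neutrally in Python with a sidecar lock file (the file names and retry parameters are illustrative):

        import os, time

        def save_with_lock(path, write_fn, retries=50, delay=0.1):
            """Acquire an exclusive sidecar lock, then write; retry while
            another process holds the lock. write_fn(path) does the save."""
            lock_path = path + '.lock'
            for _ in range(retries):
                try:
                    # O_EXCL makes creation atomic: it fails if the lock exists
                    fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
                except FileExistsError:
                    time.sleep(delay)
                    continue
                try:
                    write_fn(path)
                    return True
                finally:
                    os.close(fd)
                    os.remove(lock_path)
            return False

    In .NET the same shape works with a named System.Threading.Mutex held around the XmlDocument.Save call.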

    Read the article

  • Dividing web.config into multiple files in ASP.NET

    - by Jalpesh P. Vadgama
    When you have different people working on one project remotely, you will run into problems with web.config, as everybody has a different version of it. Once you check in your web.config with your latest changes, the other people have to get that latest web.config and then make specific changes for their local environment. Most people who have worked remotely have faced this problem; the most common examples would be connection string and app settings changes. For this kind of situation there is a good solution: we can divide particular sections of web.config into multiple files. For example, we could have a separate ConnectionStrings.config file for connection strings and an AppSettings.config file for app settings. Many people do not know that there is an attribute called 'configSource' where we can define the path of an external config file, and the section will be loaded from that external file, just like below:

        <configuration>
            <appSettings configSource="AppSettings.config"/>
            <connectionStrings configSource="ConnectionStrings.config"/>
        </configuration>

    And you could have your ConnectionStrings.config file like the following:

        <connectionStrings>
            <add name="DefaultConnection"
                 connectionString="Data Source=(LocalDb)\v11.0;Initial Catalog=aspnet-WebApplication1-20120523114732;Integrated Security=True"
                 providerName="System.Data.SqlClient" />
        </connectionStrings>

    In the same way you would have another AppSettings.config file, like the following:

        <appSettings>
            <add key="aspnet:UseTaskFriendlySynchronizationContext" value="true" />
            <add key="ValidationSettings:UnobtrusiveValidationMode" value="WebForms" />
        </appSettings>

    That's it. Hope you like this post. Stay tuned for more.

    Read the article

  • My Ubuntu Touch seems to be broken no matter how many different files I try

    - by zeokila
    So I'm planning on testing out Ubuntu Touch and developing some applications for it, so I thought I would flash it to my Nexus 4, which was already unlocked and running Paranoid Android and its associated kernel. I headed to Ubuntu's website, browsed around, and came across this page: Touch/Install - Ubuntu Wiki. I followed Step 1 word for word, and it seems to me that everything in that part is fine. I skipped Step 2, having already done that for Paranoid Android, and then followed Steps 3 and 4 word for word as well, flashing with the command phablet-flash -b. Everything seemed to go fine. It booted up and all seemed normal, but it wasn't. Here are some major bugs that only seem to happen to me. I was greeted with a normal lock screen, but one of the first noticeable things on the home screen is that I only have 4 tabs, not 5. Some of the applications that are supposed to work do not (I know some are dummies); for example, the calculator appears as a black box on the home screen and crashes a couple of seconds later. Another annoying problem is when closing apps: going from right to left, if I close an app on the left, it closes but then opens the app to its right, which is weird. Yet another bug is in the pull-down drawer: when I click on it, not all the icons are there. Pretty much everything else works as it should, but my main problem is that there is no telephony. I'm not sure how it works exactly, but I'm never asked for a SIM code (I'm guessing you need that?), I can't compose SMSs, and I can't dial numbers; it won't let me select the 'Send' or 'Call' button. I've since tried manual installs all over the place with these files:

        http://cdimage.ubuntu.com/ubuntu-touch-preview/daily-preinstalled/current/saucy-preinstalled-armel+mako.zip (says 44MB on the site, 46.6MB on my laptop)
        http://cdimage.ubuntu.com/ubuntu-touch-preview/daily-preinstalled/current/saucy-preinstalled-phablet-armhf.zip (says 366MB on the site, 383.2MB on my laptop)

    There are some weird size differences between what the site told me and what I downloaded, but re-downloading just leaves me with the same files, and the installs always end up with the same problems: no telephony and those weird bugs. So my question is: how can I get the same version as everyone else, with the ability to send texts, make calls, open the calculator, and more? (It is definitely running saucy, per the device info screen.)

    Read the article

  • dpkg stuck downloading font files

    - by Bob Bowles
    I have been reinstalling Ubuntu 12.04. The install from USB works fine, and I could update everything OK, but when I got to re-installing my application software I hit a snag. One of the packages I tried to re-install was ttf-mscorefonts-installer. dpkg stalled during this setup, downloading a font file (it had tried to download it all night). I stopped dpkg and attempted to re-start downloading something else, but it would not let me. The commands I typed are as follows:

        bob@bobStudio:~$ sudo rm /var/lib/dpkg/lock

    This unlocks dpkg, but if I try to do something I get the following message (e.g.):

        bob@bobStudio:~$ sudo apt-get install synaptic
        E: dpkg was interrupted, you must manually run 'sudo dpkg --configure -a' to correct the problem

    So, I did just that:

        bob@bobStudio:~$ sudo dpkg --configure -a

    whereupon it started the previously failed download all over again. I went round this loop a few times, and each time the configure command re-started the failing download, but then I got this:

        bob@bobStudio:~$ sudo dpkg --configure -a
        Setting up update-notifier-common (0.119ubuntu8.4) ...
        ttf-mscorefonts-installer: downloading http://downloads.sourceforge.net/corefonts/andale32.exe
        Traceback (most recent call last):
          File "/usr/lib/update-notifier/package-data-downloader", line 234, in process_download_requests
            dest_file = urllib.urlretrieve(files[i])[0]
          File "/usr/lib/python2.7/urllib.py", line 93, in urlretrieve
            return _urlopener.retrieve(url, filename, reporthook, data)
          File "/usr/lib/python2.7/urllib.py", line 239, in retrieve
            fp = self.open(url, data)
          File "/usr/lib/python2.7/urllib.py", line 207, in open
            return getattr(self, name)(url)
          File "/usr/lib/python2.7/urllib.py", line 344, in open_http
            h.endheaders(data)
          File "/usr/lib/python2.7/httplib.py", line 954, in endheaders
            self._send_output(message_body)
          File "/usr/lib/python2.7/httplib.py", line 814, in _send_output
            self.send(msg)
          File "/usr/lib/python2.7/httplib.py", line 776, in send
            self.connect()
          File "/usr/lib/python2.7/httplib.py", line 757, in connect
            self.timeout, self.source_address)
          File "/usr/lib/python2.7/socket.py", line 553, in create_connection
            for res in getaddrinfo(host, port, 0, SOCK_STREAM):
        IOError: [Errno socket error] [Errno -2] Name or service not known
        Setting up ttf-mscorefonts-installer (3.4ubuntu3) ...

        bob@bobStudio:~$ sudo apt-get update
        E: Could not get lock /var/lib/apt/lists/lock - open (11: Resource temporarily unavailable)
        E: Unable to lock directory /var/lib/apt/lists/
        bob@bobStudio:~$ sudo rm /var/lib/dpkg/lock
        bob@bobStudio:~$ sudo apt-get update
        E: Could not get lock /var/lib/apt/lists/lock - open (11: Resource temporarily unavailable)
        E: Unable to lock directory /var/lib/apt/lists/

    The good news is that, once I sorted out the file locks, this seems to have permanently aborted the setup of the font package, so at least I can do something else with dpkg. That leaves two questions: 1) How could I have broken the loop without actually crashing out of dpkg? 2) How can I set up the ttf-mscorefonts-installer package in the future? Is this download really broken, or is it 'just' a bad Internet connection?

    Read the article

  • Importing PKCS#12 (.p12) files into Firefox From the Command Line

    - by user11165
    I've posted this question on the Ubuntu and Firefox forums and really could do with some help; does anyone know where I could look for the answer? I'm hoping the power of social media will come through. I need to perform the following action in Firefox 3.6.x: open Edit - Preferences - Advanced - Encryption - View Certificates - Your Certificates - Import. However, I need the same functionality from the bash command line. So far I've established that the following command is supposed to be used:

        certutil -A -t "u,u,u" -d /home/df001/.mozilla/firefox/qe5y5lht.tc.default/ -n "mycert" -i client.p12

    This executes with no issues; however, the cert doesn't show up in any Firefox certificate store. I have noted that prior to running this command I had cert8.db, key3.db and secmod.db files in the above folder; after running the command, certutil seems to have created cert9.db, key4.db and pkcs12.txt files. Listing the contents using the command:

        certutil -L -d sql:/home/df001/.mozilla/firefox/qe5y5lht.tc.default/

    does seem to confirm that my attempts at importing files into a certificate store of some kind have worked, because I get:

        Certificate Nickname                         Trust Attributes (SSL,S/MIME,JAR/XPI)
        Thawte SSL CA                                ,,
        Go Daddy Secure Certification Authority      ,,
        Thawte SGC CA                                ,,
        Entrust Certification Authority - L1C        ,,
        My Nero                                      CT,C,c
        mynero                                       P,,
        davidfield - Internet Widgits Pty Ltd        u,u,u

    So, having tried this and headed back over to the WWW, I came across this command:

        pk12util -d /home/df001/.mozilla/firefox/qe5y5lht.tc.default/ -i client.p12 -n "David Field" -P "cert8.db"

    This again appears to import something somewhere; however, viewing certs from the Firefox interface still doesn't show the imported cert. I'm surmising, on reading, that certutil and pk12util are creating a new NSS database which Firefox isn't reading. So my question is: how can I import the p12 cert from the command line so that it displays in the Firefox certificate manager interface? Why have I posted this here and not just on the Firefox forum? Well, I will copy the same question there as well, but the ability to use the command line to do this is important, as I have potentially 2000 machines which will need a user cert imported into Firefox via a p12 file. I need to do this in the form of a script; I thought the hard part was going to be making the p12 file from the Microsoft 2003 CA, but it turns out that's easy. I can't just import via the GUI and copy cert8.db over 2000 times, and I can't ask users to use the CA web interface, as it's for VPN access: the users are off site, and they need the VPN to get to the cert server. Is there any person out there who can help? By the way, I don't have the Tor button installed.
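
    One detail worth checking: pk12util's -P option sets a database file prefix, not a password, so -P "cert8.db" points the tool at databases named cert8.dbcert8.db and so on, which is almost certainly not what was intended; dropping it (and being consistent about plain -d versus -d sql:) matters here. For the 2000-machine case, the per-profile import loop is easy to script; a hedged Python sketch around pk12util (the .p12 location, password, and profile glob are illustrative assumptions):

        import glob
        import subprocess

        P12_FILE = '/tmp/client.p12'    # assumed distribution point
        P12_PASSWORD = 'changeit'       # assumed .p12 password

        # every default Firefox profile on the machine
        for profile in glob.glob('/home/*/.mozilla/firefox/*.default'):
            # legacy (cert8.db) databases take plain -d;
            # the newer shared format needs -d sql:<dir>
            subprocess.run(
                ['pk12util', '-i', P12_FILE, '-d', profile, '-W', P12_PASSWORD],
                check=True)
            # verify the import landed in that profile's store
            subprocess.run(['certutil', '-L', '-d', profile], check=True)

    Firefox must be closed while the databases are modified, or the changes can be lost.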

    Read the article

  • ZFS for Database Log Files

    - by user12620111
    I've been troubled by drop outs in CPU usage in my application server, characterized by the CPUs suddenly going from close to 90% CPU busy to almost completely CPU idle for a few seconds. Here is an example of a drop out as shown by a snippet of vmstat data taken while the application server is under a heavy workload:

        # vmstat 1
         kthr      memory            page            disk          faults      cpu
         r b w   swap  free  re  mf pi po fr de sr s3 s4 s5 s6   in   sy   cs us sy id
         1 0 0 130160176 116381952 0 16 0 0 0 0  0  0  0  0  0 207377 117715 203884 70 21  9
        12 0 0 130160160 116381936 0 25 0 0 0 0  0  0  0  0  0 200413 117162 197250 70 20  9
        11 0 0 130160176 116381920 0 16 0 0 0 0  0  0  1  0  0 203150 119365 200249 72 21  7
         8 0 0 130160176 116377808 0 19 0 0 0 0  0  0  0  0  0 169826  96144 165194 56 17 27
         0 0 0 130160176 116377800 0 16 0 0 0 0  0  0  0  0  1  10245   9376   9164  2  1 97
         0 0 0 130160176 116377792 0 16 0 0 0 0  0  0  0  0  2  15742  12401  14784  4  1 95
         0 0 0 130160176 116377776 2 16 0 0 0 0  0  0  1  0  0  19972  17703  19612  6  2 92
        14 0 0 130160176 116377696 0 16 0 0 0 0  0  0  0  0  0 202794 116793 199807 71 21  8
         9 0 0 130160160 116373584 0 30 0 0 0 0  0  0 18  0  0 203123 117857 198825 69 20 11

    This behavior occurred consistently while the application server was processing synthetic transactions: HTTP requests from JMeter running on an external machine. I explored many theories trying to explain the drop outs, including:

        - Unexpected JMeter behavior
        - Network contention
        - Java Garbage Collection
        - Application Server thread pool problems
        - Connection pool problems
        - Database transaction processing
        - Database I/O contention

    Graphing the CPU %idle led to a breakthrough: several of the drop outs were 30 seconds apart. With that insight, I went digging through the data again, looking for other outliers that were 30 seconds apart. In the database server statistics, I found spikes in the iostat "asvc_t" (average response time of disk transactions, in milliseconds) for the disk drive that was being used for the database log files. Here is an example:

                            extended device statistics
            r/s    w/s   kr/s    kw/s  wait actv wsvc_t asvc_t  %w  %b device
            0.0 2053.6    0.0  8234.3   0.0  0.2    0.0    0.1   0  24 c3t60080E5...F4F6d0s0
            0.0 2162.2    0.0  8652.8   0.0  0.3    0.0    0.1   0  28 c3t60080E5...F4F6d0s0
            0.0 1102.5    0.0 10012.8   0.0  4.5    0.0    4.1   0  69 c3t60080E5...F4F6d0s0
            0.0   74.0    0.0  7920.6   0.0 10.0    0.0  135.1   0 100 c3t60080E5...F4F6d0s0
            0.0  568.7    0.0  6674.0   0.0  6.4    0.0   11.2   0  90 c3t60080E5...F4F6d0s0
            0.0 1358.0    0.0  5456.0   0.0  0.6    0.0    0.4   0  55 c3t60080E5...F4F6d0s0
            0.0 1314.3    0.0  5285.2   0.0  0.7    0.0    0.5   0  70 c3t60080E5...F4F6d0s0

    Here is a little more information about my database configuration:

        - The database and application server were running on two different SPARC servers.
        - Storage for the database was on a storage array connected via 8 gigabit Fibre Channel.
        - Data storage and log files were on different physical disk drives.
        - Reliable low latency I/O was provided by battery backed NVRAM.
        - Highly available: two Fibre Channel links accessed via MPxIO, and two mirrored cache controllers.
        - The log file physical disks were mirrored in the storage device.
        - Database log files were on a ZFS filesystem, with cutting-edge technologies such as copy-on-write and end-to-end checksumming.

    Why would I be getting service time spikes in my high-end storage?
    First, I wanted to verify that the database log disk service time spikes aligned with the application server CPU drop outs, and they did. At first, I guessed that the disk service time spikes might be related to flushing the write-through cache on the storage device, but I was unable to validate that theory. After searching the WWW for a while, I decided to try using a separate log device:

        # zpool add ZFS-db-41 log c3t60080E500017D55C000015C150A9F8A7d0

    The ZFS log device is configured in a similar manner as described above: two physical disks mirrored in the storage array. This change to the database storage configuration eliminated the application server CPU drop outs. Here is the zpool configuration:

        # zpool status ZFS-db-41
          pool: ZFS-db-41
         state: ONLINE
          scan: none requested
        config:
                NAME                   STATE
                ZFS-db-41              ONLINE
                  c3t60080E5...F4F6d0  ONLINE
                logs
                  c3t60080E5...F8A7d0  ONLINE

    Now the I/O spikes look like this:

                            extended device statistics
            r/s    w/s   kr/s    kw/s  wait actv wsvc_t asvc_t  %w  %b device
            0.0 1053.5    0.0  4234.1   0.0  0.8    0.0    0.7   0  75 c3t60080E5...F8A7d0s0

                            extended device statistics
            r/s    w/s   kr/s    kw/s  wait actv wsvc_t asvc_t  %w  %b device
            0.0 1131.8    0.0  4555.3   0.0  0.8    0.0    0.7   0  76 c3t60080E5...F8A7d0s0

                            extended device statistics
            r/s    w/s   kr/s    kw/s  wait actv wsvc_t asvc_t  %w  %b device
            0.0 1167.6    0.0  4682.2   0.0  0.7    0.0    0.6   0  74 c3t60080E5...F8A7d0s0
            0.0  162.2    0.0 19153.9   0.0  0.7    0.0    4.2   0  12 c3t60080E5...F4F6d0s0

                            extended device statistics
            r/s    w/s   kr/s    kw/s  wait actv wsvc_t asvc_t  %w  %b device
            0.0 1247.2    0.0  4992.6   0.0  0.7    0.0    0.6   0  71 c3t60080E5...F8A7d0s0
            0.0   41.0    0.0    70.0   0.0  0.1    0.0    1.6   0   2 c3t60080E5...F4F6d0s0

                            extended device statistics
            r/s    w/s   kr/s    kw/s  wait actv wsvc_t asvc_t  %w  %b device
            0.0 1241.3    0.0  4989.3   0.0  0.8    0.0    0.6   0  75 c3t60080E5...F8A7d0s0

                            extended device statistics
            r/s    w/s   kr/s    kw/s  wait actv wsvc_t asvc_t  %w  %b device
            0.0 1193.2    0.0  4772.9   0.0  0.7    0.0    0.6   0  71 c3t60080E5...F8A7d0s0

    We can see the steady flow of 4k writes to the ZIL device from O_SYNC database log file writes. The spikes are from flushing the transaction group. Like almost all problems that I run into, once I thoroughly understand the problem, I find that other people have documented similar experiences. Thanks to all of you who have documented alternative approaches. Saved for another day: now that the problem is obvious, I should try "zfs:zfs_immediate_write_sz" as recommended in the ZFS Evil Tuning Guide.

    References:

        - The ZFS Intent Log
        - Solaris ZFS, Synchronous Writes and the ZIL Explained
        - ZFS Evil Tuning Guide: Cache Flushes
        - ZFS Evil Tuning Guide: Tuning ZFS for Database Performance

    Read the article

  • Should I add the vcxproj.filter files to source control

    - by jschroedl
    While evaluating Visual Studio 2010 Beta 2, I see that in the converted directory my vcproj files have become vcxproj files. There are also vcxproj.filter files alongside each project, which appear to contain a description of the folder structure (\Source Files, \Header Files, etc.). Do you think these filter files should be kept per-user, or should they be shared across the whole dev group and checked into SCC? My current thinking is to check them in, but I wondered if there are any reasons not to do that, or perhaps good reasons that I should definitely check them in. The obvious benefit is that the folder structures will match if I'm looking at someone else's machine, but maybe they'd like to reorganize things logically?

    Read the article

  • How to find out where (or if) MySQL 5 logs are stored on a WHM/cPanel machine

    - by moi
    I have a WHM/cPanel reseller hosting account on a virtual private server (Linux), with root access to the machine via SSH. I am trying to locate a file that contains information that will help me determine which users have accessed which DB and from which hosts. I would imagine this kind of data is stored in a log file somewhere. The MySQL documentation says: "The general query log - Established client connections and statements received from clients." See: http://dev.mysql.com/doc/refman/5.0/en/server-logs.html It also says: "By default, all log files are created in the mysqld data directory." So I am NOT asking where general query logs are stored (because I expect I would get answers saying "it depends"). Please help me work out: how can I go about finding out where MySQL general query logs are stored on a Linux machine? A couple of things I've already tried: I looked at /etc/my.cnf; it was a tiny file that only contained the following:

        [mysqld]
        skip-bdb
        skip-innodb
        set-variable = max_connections=500
        safe-show-database

    I have also looked in /var/lib/mysql/, but I could not see any log-like file names in that directory. Any clues on this would be most welcome.
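
    Rather than hunting the filesystem, the server itself can be asked where (and whether) it logs; a small sketch, assuming a MySQL version recent enough to expose the general_log variables (5.1 and later; on 5.0 the general log is controlled by the --log option and defaults to host_name.log in the datadir):

        import subprocess

        # asks mysqld for its data directory and general-log settings;
        # assumes the mysql CLI is installed and root credentials work
        query = (
            "SHOW VARIABLES LIKE 'datadir';"
            "SHOW VARIABLES LIKE 'general_log%';"
            "SHOW VARIABLES LIKE 'log_output';"
        )
        subprocess.run(['mysql', '-u', 'root', '-p', '-e', query], check=True)

    A my.cnf as sparse as the one above suggests logging was never switched on, in which case the access history being sought was probably never recorded in the first place.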

    Read the article

  • Windows batch file to delete .svn files and folders

    - by Marco Demaio
    Hi, in order to delete all ".svn" files/folders/subfolders in "myfolder" I use this simple line in a batch file:

        FOR /R myfolder %%X IN (.svn) DO (RD /S /Q "%%X")

    This works, but if there are no ".svn" folders the batch file shows a warning saying: "The system cannot find the file specified." This warning is very noisy, so I was wondering how to make it understand that if it doesn't find any ".svn" folders it must skip the RD command. Usually wildcards would suffice, but in this case I don't know how to use them, because I don't want to delete files/folders with a .svn extension; I want to delete the folders named exactly ".svn". So if I do this:

        FOR /R myfolder %%X IN (*.svn) DO (RD /S /Q "%%X")

    it no longer deletes the folders named exactly ".svn". I tried also this:

        FOR /R myfolder %%X IN (.sv*) DO (RD /S /Q "%%X")

    but it doesn't work either; it deletes nothing. Thanks for any help!
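
    In the batch file itself, guarding the delete with an existence test, IF EXIST "%%X" RD /S /Q "%%X", silences the warning without changing behavior. The same cleanup is also easy to express by matching on the exact directory name rather than an extension; a minimal Python sketch of that approach:

        import os
        import shutil

        def remove_svn_dirs(root):
            """Recursively delete every directory named exactly '.svn' under root."""
            for dirpath, dirnames, _ in os.walk(root, topdown=True):
                if '.svn' in dirnames:
                    shutil.rmtree(os.path.join(dirpath, '.svn'))
                    dirnames.remove('.svn')  # don't descend into what was just deleted

        remove_svn_dirs('myfolder')

    Nothing is printed when no '.svn' directory exists, which is exactly the quiet behavior being asked for.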

    Read the article

  • How to sign installation files of a Visual Studio .msi

    - by Alex
    This may be a duplicate, though I can't find the original at this time; if so, please point me in the right direction. I recently purchased an Authenticode certificate from GlobalSign and am having problems signing my files for deployment. There are a couple of .exe files that are generated by a project and then put into a .msi. When I sign the .exe files with signtool, the certificate is valid and they run fine. The problem is that when I build the .msi (using the Visual Studio setup project), the .exe files lose their signatures. I can sign the .msi after it is built, but the installed .exe files continue the whole "unknown publisher" business. How can I retain the signature on these files for installation on the client machine? Your help is appreciated. -Alex

    Read the article

  • Eclipse CDT: Import source / header files into my new project, without duplicating them

    - by Tom
    Hi all, I'm sure there is a very simple solution for this. I have a bunch of .cpp/.h files from a project, say in directory ~/files. On the other hand, I want to create a C++ project using Eclipse to work on those files, so I put my workspace in ~/wherever. Then I create a C++ project, ~/wherever/project, and include the source files (located in ~/files). The problem I'm having is that the files are now duplicated in ~/wherever/project, and I would like to avoid that, especially so that I know which copy of each file to commit. Is this possible? I'm sure it is, but I can't work it out. Thanks in advance.

    Read the article

  • Chinese encoding issue while listing files

    - by Null Pointer
    I am running a Java application on Solaris 10 with Chinese. There are some files in a directory with Chinese filenames. When I do files = new File(dir).list(), where "dir" is the parent directory containing a Chinese-named file, the resulting filename files[0] comes back as ????? (junk characters). Now, the deal is that my program's file.encoding property is already set to GBK, and Charset.isSupported("GBK") returns true too. So where could the problem be? I am running out of ideas. NOTE: I am not trying to print the filename anywhere or copy the file or anything like that. I am simply opening a stream to it, something like below:

        files = new File(dir).list();
        new FileInputStream(files[0]);

    Now this gives me a FileNotFoundException, so I debug, just to find that the value inside files[0] is "??????".
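
    The underlying mechanics are worth separating from Java: on Solaris a filename is just a byte sequence, and junk appears when those bytes are decoded with the wrong charset. In the JVM that decoding is governed by the locale-derived sun.jnu.encoding property, not by file.encoding, which covers file contents, so checking the process locale is the next step. A language-neutral illustration of the failure mode in Python (the two-character name is illustrative):

        # GBK bytes of the two-character Chinese name '中文'
        raw = b'\xd6\xd0\xce\xc4'

        print(raw.decode('gbk'))       # 中文: the right charset recovers the name
        print(raw.decode('latin-1'))   # ÖÐÎÄ: the wrong charset yields junk
        print(raw.decode('ascii', 'replace'))
        # undecodable bytes become replacement marks, which many terminals
        # render as '?' - the '?????' symptom in the question

    So the thing to verify is the charset the JVM uses to decode names coming back from the OS, not whether GBK itself is supported.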

    Read the article

  • Perl - Internal File (create and execute)

    - by drewrockshard
    I have a quick question about creating files with Perl and executing them. I want to know whether it is possible to generate a file using Perl (I actually need a .bat script) and then execute that file internally to the program. I know I can create files with Perl, and I have; however, I want the whole thing to happen internally to the program: no file is actually written to disk, everything remains in memory, and once the program finishes writing the file, it executes it and then discards it. I'm basically trying to have it create a batch script on the fly, so that I only end up with the output text files from the script, rather than creating the batch script on disk, executing it, and then deleting the batch file from disk when it's done. Can this be done, and how would I go about doing it? Regards, Drew
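
    One wrinkle: cmd.exe reads batch scripts from a real file, so a purely in-memory .bat cannot be executed as such; the closest practical pattern is a temporary file that exists only for the duration of the run. A sketch of that pattern in Python (the batch contents and output file are illustrative):

        import os
        import subprocess
        import tempfile

        commands = "@echo off\r\necho hello > out.txt\r\n"  # hypothetical batch body

        # delete=False so cmd.exe can open the file after we close it here
        with tempfile.NamedTemporaryFile('w', suffix='.bat', delete=False) as f:
            f.write(commands)
            script = f.name
        try:
            subprocess.run(['cmd', '/c', script], check=True)
        finally:
            os.remove(script)  # discard the script as soon as it has run

    In Perl the same shape is File::Temp plus system(); the file does touch disk, but only transiently.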

    Read the article

  • Change Check Out Folder for checked out files in SourceSafe

    - by Town
    I had to rebuild my machine and went from XP to Windows 7. I've now got a bit of an issue: I had files checked out in SourceSafe previously, and I still have copies of them in the local folder on my new install. However, SourceSafe still has them checked out to the old XP folder (c:\documents and settings, etc.), whereas the files now reside under c:\Users. Pending Checkins in Visual Studio now thinks I have nothing checked out, and SourceSafe declares that the files are checked out to me under the c:\documents and settings\ path. Is there any way to tell SourceSafe to simply "look over there" for the files instead? Individually undoing and redoing the checkout on each file seems to work, but that's a lengthy process and one I'd like to avoid if possible. If I simply check the files out individually, then it lists them as checked out to me twice, once for each of the locations. Any pointers would be very much appreciated!

    Read the article

  • How to enable error log in lighttpd properly?

    - by Tomaszs
    I have a CentOS 5 system with lighttpd and FastCGI enabled. It logs access but does not log errors: I get Internal Server Error 500 with no info in the log, and when I try to open a non-existing file there is also no info in the error log. How do I enable it properly? Below is the list of modules that I've enabled:

        server.modules = (
            "mod_rewrite",
            "mod_redirect",
            "mod_alias",
        #   "mod_access",
        #   "mod_cml",
        #   "mod_trigger_b4_dl",
        #   "mod_auth",
            "mod_status",
            "mod_setenv",
            "mod_fastcgi",
        #   "mod_webdav",
        #   "mod_proxy_core",
        #   "mod_proxy_backend_fastcgi",
        #   "mod_proxy_backend_scgi",
        #   "mod_proxy_backend_ajp13",
        #   "mod_simple_vhost",
        #   "mod_evhost",
        #   "mod_userdir",
        #   "mod_cgi",
        #   "mod_compress",
        #   "mod_ssi",
        #   "mod_usertrack",
        #   "mod_expire",
        #   "mod_secdownload",
        #   "mod_rrdtool",
            "mod_accesslog"
        )

    Here are the debugging settings:

        ## enable debugging
        #debug.log-request-header = "enable"
        #debug.log-response-header = "enable"
        #debug.log-request-handling = "enable"
        debug.log-file-not-found = "enable"
        #debug.log-condition-handling = "enable"

    The paths to the error and access logs:

        ## where to send error-messages to
        server.errorlog = "/home/lxadmin/httpd/lighttpd/error.log"

        #### accesslog module
        accesslog.filename = "/home/lxadmin/httpd/lighttpd/ligh.log"

    The FastCGI settings:

        fastcgi.debug = 1
        fastcgi.server = ( ".php" => ((
            "bin-path" => "/usr/bin/php-cgi",
            "socket" => "/tmp/php.socket",
            "max-procs" => 12,
            "bin-environment" => (
                "PHP_FCGI_CHILDREN" => "2",
                "PHP_FCGI_MAX_REQUESTS" => "500"
            )
        )))

    And in an included config file I have:

        server.errorlog = "/home/httpd/mywebsite.com/stats/mywebsite.com-error_log"

    As for the log files themselves:

        /home/httpd/mywebsite.com/stats/
        -rw-r--r-- 1 apache apache 5173239 May 16 11:34 mywebsite.com-custom_log
        -rwxrwxrwx 1 root   root         0 Mar 27  2009 mywebsite.com-error_log

        /home/lxadmin/httpd/lighttpd/
        -rwxrwxrwx 1 apache apache    2184 Apr 22 22:59 error.log
        -rwxrwxrwx 1 apache apache 6088621 May 16 11:26 ligh.log

    I gave the error logs chmod 777 to check whether permissions were the issue, but apparently they're not. So my question is: what do I do to get the error log enabled?

    Read the article

  • C# Binary File Compare

    - by Simon Farrow
    I'm in a situation where I want to compare two binary files. One of them is already stored on the server, with a pre-calculated CRC32 in the database from when I stored it originally. I know that if the CRCs are different, then the files are definitely different. However, if the CRCs are the same, I don't know that the files are. So what I'm looking for is a nice efficient way of comparing two streams: one from the posted file and one from the file system. I'm not an expert on streams, but I'm well aware that I could easily shoot myself in the foot here as far as memory usage is concerned. Any help is greatly appreciated.
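
    The memory-safe shape is to read both streams in fixed-size chunks and stop at the first mismatch, which keeps the comparison at constant memory and lets the CRC check remain the cheap first pass. A minimal language-neutral sketch in Python (the 64 KiB chunk size and file names are arbitrary choices; assumes file-like streams whose read(n) returns short only at end of stream):

        def streams_equal(a, b, chunk_size=64 * 1024):
            """Compare two binary streams chunk by chunk, using constant memory."""
            while True:
                chunk_a = a.read(chunk_size)
                chunk_b = b.read(chunk_size)
                if chunk_a != chunk_b:
                    return False   # mismatch, or one stream ended early
                if not chunk_a:
                    return True    # both exhausted with no differences

        with open('stored.bin', 'rb') as f1, open('uploaded.bin', 'rb') as f2:
            print(streams_equal(f1, f2))

    If the lengths are known up front, as they usually are for a posted file and a stored one, comparing them first skips the whole read whenever they differ.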

    Read the article

  • Using the Visual Studio .ncb file for reflection

    - by Rushi
    I am developing a visual game level editor in C++. For this I want a reflection (RTTI) mechanism, so that I can know class attributes at runtime. I am currently using PDB files for this, but via a PDB I cannot retrieve the actual source line of an attribute, and hence the extra information kept in comment form alongside it. Visual Studio uses NCB files for IntelliSense. So would it be a better idea to use NCB instead of PDB? If yes, how do I retrieve information from NCB files? Is there any SDK for them, like the DIA SDK?

    Read the article
