Search Results

Search found 61297 results on 2452 pages for 'open files'.


  • Ubuntu automatic logout whenever I execute exe files

    - by KeepTrying
    I have a problem. Here's the thing. There were four partitions on my hard drive: one for the Ubuntu root folder, one for the Ubuntu home folder, one for general stuff like music and movies, and the last one for swap. To install Windows 7, I resized the partitions and changed their order using GParted. I moved all of the ext-formatted partitions to the left, so the spare space ended up on the right. I formatted that spare space as NTFS and installed Windows 7. After successfully installing Windows 7, I used a LiveUSB to fix GRUB: I installed Boot Repair and, with just one click, I can now dual boot Ubuntu and Windows 7. But here is the point: because the order of the partitions changed, especially the partition containing the home folder, I couldn't log in to Ubuntu. I used recovery mode and changed the /etc/passwd file. Everything almost got back to normal except one thing: the Windows apps that I installed via Wine don't work anymore. I run them from the Applications/Wine/Programs menu but nothing loads. One more thing: when I double click on exe files to run them, Ubuntu suddenly logs out. Thank you for reading my post; it's quite long and my English is fairly poor. I appreciate anyone who reads it.

    Read the article

  • I deleted all files and folders (including hidden) from /home/username/, now I'm in big trouble

    - by jeffery_the_wind
    I am logged into a remote Ubuntu server, and I accidentally erased the entire /home/username/ directory for the current user. The only thing left is a hidden directory called .gvfs. I don't need any of the Documents/Music/etc. The problem now is that it won't let me cd into the /var/www/ directory, which has permissions 666 and is owned by the current user. I am afraid to disconnect from my ssh session because I don't know if I will be able to get back on. Have I permanently created a problem? Is there a way I can replace the most important files in the /home/username/ directory? Thanks! ** EDIT ** Thanks everyone for the help. I figured out that the problem with cd'ing into /var/www/ was actually the permissions on /var/www/ itself: it was set to 666 (no execute bit, which a directory needs before you can cd into it), I changed it to 755 and everything was good again. It doesn't look like anything system-wide was ruined by deleting the contents of the user folder.

    Read the article

  • C++ Database vs Reading Files

    - by Ohmages
    I've been programming a C++ game/server for the past year. I have been using MySQL for character logins, items, monsters, etc. (I'm on Windows). My question is: what databases do some of the big-time developers use (e.g. Battle.net, Diablo II, Diablo III, Mythos, Hellgate, etc.)? Do they have their own database they built, or do they use an existing framework for logins and character transfers? I do know that in Diablo II they use character files to transfer characters into the game world, but what about the login into Battle.net? Would it be wiser for me to stick with MySQL, or is there something out there faster and more stable, or should I create a login system of my own that looks through a file to see if you provided the correct password? Can't wait to get some replies. Thanks! PS. Currently the framework is much like Battle.net, where you log in to a lobby, then create and join games. The game server and lobby server are different servers too, so I'm just wondering about the lobby server for logins, because I'm expecting several hundred thousand connections/logins.
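
    A quick aside on the "look through a file to see if you provided the correct password" idea: below is a minimal sketch of that approach in Python (not tied to any game-server framework; the file format and function names are hypothetical). It only illustrates storing salted hashes rather than plain-text passwords; at several hundred thousand logins, a real database or key-value store with an index is still the safer choice than scanning a flat file per login.

        # Minimal sketch of a file-backed login check (hypothetical format:
        # one "username:salt_hex:sha256_hex" record per line).
        import hashlib
        import os

        def hash_password(password, salt):
            # Salted SHA-256 for illustration; real systems should prefer a slow KDF
            # such as PBKDF2 or bcrypt.
            return hashlib.sha256(salt + password.encode("utf-8")).hexdigest()

        def add_account(path, username, password):
            salt = os.urandom(16)
            with open(path, "a") as f:
                f.write("%s:%s:%s\n" % (username, salt.hex(), hash_password(password, salt)))

        def check_login(path, username, password):
            with open(path) as f:
                for line in f:
                    user, salt_hex, digest = line.rstrip("\n").split(":")
                    if user == username:
                        return hash_password(password, bytes.fromhex(salt_hex)) == digest
            return False  # unknown user

        # Usage:
        # add_account("accounts.txt", "alice", "s3cret")
        # print(check_login("accounts.txt", "alice", "s3cret"))   # True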

    Read the article

  • Javascript object dependencies

    - by Anurag
    In complex client-side projects the number of JavaScript files can get very large, but for performance reasons it's good to concatenate these files and compress the result for sending over the wire. I am having problems concatenating them because in some cases dependencies are included after they are needed. For instance, there are two files: /modules/Module.js <requires Core.js> and /modules/core/Core.js. The directories are traversed recursively, and Module.js gets included before Core.js, which causes errors. This is just a simple example; dependencies could span directories, and there could be other complex cases, though there are no circular dependencies. The JavaScript structure I follow is similar to Java packages, where each file defines a single object (I'm using MooTools, but that's irrelevant). The structure of each JavaScript file and its dependencies is always consistent: Module.js var Module = new Class({ Implements: Core, ... }); Core.js var Core = new Class({ ... }); What practices do you usually follow to handle dependencies in projects where the number of JavaScript files is huge and there are inter-file dependencies?
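
    The ordering problem described here is essentially a topological sort over the "requires" graph. As a rough illustration (in Python rather than the project's JavaScript, with a hypothetical dependency map that in practice would be built by scanning each file for its "<requires ...>" annotation), the concatenation order can be computed like this:

        # Sketch: order files so that every file appears after its dependencies.
        # The dependency map below is hypothetical and hand-written for this example.
        from graphlib import TopologicalSorter  # Python 3.9+

        deps = {
            "modules/Module.js": {"modules/core/Core.js"},   # Module.js requires Core.js
            "modules/core/Core.js": set(),
        }

        order = list(TopologicalSorter(deps).static_order())
        print(order)   # ['modules/core/Core.js', 'modules/Module.js']

        # Concatenate in that order before compressing:
        # bundle = "\n".join(open(path).read() for path in order)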

    Read the article

  • System.InvalidOperationException: Failed to map the path '/sharedDrive/Public'

    - by d03boy
    I'm trying to set up a page that will allow users to download files from a shared drive (where the file is actually sent via the page). So far this is what I have. public partial class TestPage : System.Web.UI.Page { protected DirectoryInfo dir; protected FileInfo[] files; protected void Page_Load(object sender, EventArgs e) { dir = new DirectoryInfo(Server.MapPath(@"\\sharedDrive\Public")); files = dir.GetFiles(); } } The aspx page looks kind of like this: <% Response.Write(System.Security.Principal.WindowsIdentity.GetCurrent().Name); %> <ul> <% foreach (System.IO.FileInfo f in files) { Response.Write("<li>" + f.FullName + "</li>"); } %> </ul> When I remove the erroneous parts of the code, the page tells me that the Windows Identity I'm using is my user (which has access to the drive). I don't understand what the problem could be or what it's even complaining about.

    Read the article

  • Configure IIS7 to serve static content through the ASP.NET runtime

    - by Anton Gogolev
    I searched high and low and still cannot find a definite answer. How do I configure IIS 7.0, or a web application in IIS, so that the ASP.NET runtime will handle all requests, including ones to static files like *.js, *.gif, etc.? What I'm trying to do is as follows. We have a kind of SaaSy site which we can "skin" for every customer. "Skinning" means developing a custom master page and using a bunch of *.css files and other images. Quite naturally, I'm using a VirtualPathProvider, which operates like this: public override System.Web.Hosting.VirtualFile GetFile(string virtualPath) { if(PhysicalFileExists(virtualPath)) { var virtualFile = base.GetFile(virtualPath); return virtualFile; } if(VirtualFileExists(virtualPath)) { var brandedVirtualPath = GetBrandedVirtualPath(virtualPath); var absolutePath = HttpContext.Current.Server.MapPath(brandedVirtualPath); Trace.WriteLine(string.Format("Serving '{0}' from '{1}'", brandedVirtualPath, absolutePath), "BrandingAwareVirtualPathProvider"); var virtualFile = new VirtualFile(brandedVirtualPath, absolutePath); return virtualFile; } return null; } The basic idea is as follows: we have a branding folder inside our web app, which in turn contains folders for each "brand", with "brand" being equal to the host name. That is, requests to http://foo.example.com/ should use static files from branding/foo_example_com, whereas http://bar.example.com/ should use content from branding/bar_example_com. Now what I want IIS to do is to forward all requests for static files to the StaticFileHandler, which would then use this whole "infrastructure" and serve the correct files. However, try as I might, I cannot configure IIS to do this.

    Read the article

  • Need help with a recursive Perl directory listing

    - by Catarrunas
    Hi everybody, I think this is a simple problem, but I've been stuck on it for some time now and need a fresh pair of eyes. The thing is, I have this code in Perl: #!c:/Perl/bin/perl use CGI qw/param/; use URI::Escape; print "Content-type: text/html\n\n"; my $directory = param ('directory'); $directory = uri_unescape ($directory); my @contents; readDir($directory); foreach (@contents) { print "$_\n"; } #------------------------------------------------------------------------ sub readDir(){ my $dir = shift; opendir(DIR, $dir) or die $!; while (my $file = readdir(DIR)) { next if ($file =~ m/^\./); if(-d $dir.$file) { #print $dir.$file. " ----- DIR\n"; readDir($dir.$file); } push @contents, ($dir . $file); } closedir(DIR); } I've tried to make it recursive. I need to get all the files of all directories and subdirectories, with the full path, so that I can open the files in the future. But my output only returns the files in the current directory and the files in the first directory that it finds. If I have 3 folders inside the directory it only shows the first one. Example cmd call: "perl readDir.pl directory=C:/PerlTest/" Thanks
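
    Two things in the Perl look suspicious (offered as a guess, not a verified fix): the bareword DIR handle is a single global, so the recursive call closes the handle the parent loop is still reading from, and $dir.$file has no "/" between the pieces once you go one level deep. Purely to illustrate the intended traversal, here is the same logic written in Python (not a drop-in replacement for the CGI script):

        # Illustration only: collect full paths of all files and folders,
        # recursing into subdirectories and always joining paths with a separator.
        import os

        def read_dir(directory):
            contents = []
            for name in os.listdir(directory):
                if name.startswith("."):
                    continue
                full = os.path.join(directory, name)   # never concatenate without a separator
                contents.append(full)
                if os.path.isdir(full):
                    contents.extend(read_dir(full))    # each call has its own local state
            return contents

        for path in read_dir("C:/PerlTest/"):
            print(path)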

    Read the article

  • Python halts while iteratively processing my 1GB csv file

    - by Dan
    I have two files: metadata.csv: contains an ID, followed by vendor name, a filename, etc hashes.csv: contains an ID, followed by a hash The ID is essentially a foreign key of sorts, relating file metadata to its hash. I wrote this script to quickly extract out all hashes associated with a particular vendor. It craps out before it finishes processing hashes.csv stored_ids = [] # this file is about 1 MB entries = csv.reader(open(options.entries, "rb")) for row in entries: # row[2] is the vendor if row[2] == options.vendor: # row[0] is the ID stored_ids.append(row[0]) # this file is 1 GB hashes = open(options.hashes, "rb") # I iteratively read the file here, # just in case the csv module doesn't do this. for line in hashes: # not sure if stored_ids contains strings or ints here... # this probably isn't the problem though if line.split(",")[0] in stored_ids: # if its one of the IDs we're looking for, print the file and hash to STDOUT print "%s,%s" % (line.split(",")[2], line.split(",")[4]) hashes.close() This script gets about 2000 entries through hashes.csv before it halts. What am I doing wrong? I thought I was processing it line by line. ps. the csv files are the popular HashKeeper format and the files I am parsing are the NSRL hash sets. http://www.nsrl.nist.gov/Downloads.htm#converter UPDATE: working solution below. Thanks everyone who commented! entries = csv.reader(open(options.entries, "rb")) stored_ids = dict((row[0],1) for row in entries if row[2] == options.vendor) hashes = csv.reader(open(options.hashes, "rb")) matches = dict((row[2], row[4]) for row in hashes if row[0] in stored_ids) for k, v in matches.iteritems(): print "%s,%s" % (k, v)
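
    A note on why the first version only looked like it halted: "line.split(',')[0] in stored_ids" scans a Python list for every one of the rows in a 1 GB file, which is O(n) per lookup, so progress slows to a crawl rather than stopping. The working UPDATE above fixes that with a dict; the sketch below shows the same idea with a set, mirroring the Python 2 style of the question (file names and column positions are taken from the question and may need adjusting):

        # Sketch: constant-time membership tests with a set (Python 2 style).
        import csv

        with open("metadata.csv", "rb") as f:
            stored_ids = set(row[0] for row in csv.reader(f) if row[2] == "somevendor")

        with open("hashes.csv", "rb") as f:
            for row in csv.reader(f):
                if row[0] in stored_ids:          # O(1) lookup instead of scanning a list
                    print "%s,%s" % (row[2], row[4])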

    Read the article

  • Makefile - Dependency generation

    - by Profetylen
    I am trying to create a makefile that automatically compiles and links my .cpp files into an executable via .o files. What I can't get working is automated (or even manual) dependency generation. When I uncomment the commented code below, nothing is recompiled when I run make build. All I get is make: Nothing to be done for 'build'., even if x.h (or any .h file) has changed. I've been trying to learn from this question: Makefile, header dependencies, dmckee's answer especially. Why isn't this makefile working? Clarification: I can compile everything, but when I modify any header file, the .cpp files that depend on it aren't recompiled. So if I, for instance, compile my entire source, then change a #define in a header file and run make build, I get Nothing to be done for 'build'. (when I have uncommented either commented chunk of the code below). CC=gcc CFLAGS=-O2 -Wall LDFLAGS=-lSDL -lstdc++ SOURCES=$(wildcard *.cpp) OBJECTS=$(patsubst %.cpp, obj/%.o,$(SOURCES)) TARGET=bin/test.bin # Nothing happens when i uncomment the following. (automated attempt) #depend: .depend # #.depend: $(SOURCES) # rm -f ./.depend # $(CC) $(CFLAGS) -MM $^ >> ./.depend; # #include .depend # And nothing happens when i uncomment the following. x.cpp and x.h are files in my project. (manual attempt) #x.o: x.cpp x.h clean: rm -f $(TARGET) rm -f $(OBJECTS) run: build ./$(TARGET) debug: build nm $(TARGET) gdb $(TARGET) build: $(TARGET) $(TARGET): $(OBJECTS) @mkdir -p $(@D) $(CC) $(LDFLAGS) $(OBJECTS) -o $@ obj/%.o: %.cpp @mkdir -p $(@D) $(CC) -c $(CFLAGS) $< -o $@ include $(DEPENDENCIES)

    Read the article

  • How can I write to a single xml file from two programs at the same time?

    - by Tom Bushell
    We recently started working with XML files, after many years of experience with the old INI files. My coworker found some CodeProject sample code that uses System.Xml.XmlDocument.Save. We are getting exceptions when two programs try to write to the same file at the same time: System.IO.IOException: The process cannot access the file 'C:\Test.xml' because it is being used by another process. This seems obvious in hindsight, but we had not anticipated it because accessing INI files via the Win32 API does not have this limitation. I assume there's some arbitration done by the Win32 calls that works at a higher level than the XmlDocument.Save method. I'm hoping there are higher-level XML routines somewhere in the .NET library that work similarly to the Win32 functions, but I don't know where to start looking. Or maybe we can set up our file access permissions to allow multiple programs to write to the same file? Time is short (like on almost all SW projects), and if we can't find a solution quickly, we'll have to hold our noses and go back to INI files.
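
    The general pattern here, independent of the .NET specifics, is to serialize the writers yourself: take an exclusive lock (for example an exclusive-create lock file) around the save, and have the other process retry briefly instead of failing. The sketch below shows that pattern in Python, not the .NET code from the question; the file names and helper are hypothetical.

        # Pattern sketch: only one process at a time saves the shared file;
        # everyone else waits and retries instead of throwing a sharing violation.
        import os
        import time

        def with_file_lock(lock_path, action, retries=50, delay=0.1):
            for _ in range(retries):
                try:
                    fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
                except OSError:
                    time.sleep(delay)          # another process holds the lock; wait and retry
                    continue
                try:
                    return action()
                finally:
                    os.close(fd)
                    os.remove(lock_path)
            raise RuntimeError("could not acquire %s" % lock_path)

        # Usage (save_settings is a hypothetical function that writes the XML):
        # with_file_lock("settings.xml.lock", lambda: save_settings("settings.xml"))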

    Read the article

  • Dividing web.config into multiple files in ASP.NET

    - by Jalpesh P. Vadgama
    When different people work remotely on one project, you can run into problems with web.config, because everybody ends up with a different version of it. Once you check in your web.config with your latest changes, the other people have to get the latest web.config and then re-apply the changes specific to their local environment. Most people who have worked remotely have faced this problem; the most common examples would be connection string and app settings changes. For this kind of situation there is a better solution: we can split particular sections of web.config out into multiple files. For example, we could have a separate ConnectionStrings.config file for connection strings and an AppSettings.config file for app settings. Not everyone knows that there is an attribute called 'configSource' where we can define the path of an external config file, and the section will be loaded from that external file, just like below. <configuration> <appSettings configSource="AppSettings.config"/> <connectionStrings configSource="ConnectionStrings.config"/> </configuration> And you could have your ConnectionStrings.config file like the following. <connectionStrings> <add name="DefaultConnection" connectionString="Data Source=(LocalDb)\v11.0;Initial Catalog=aspnet-WebApplication1-20120523114732;Integrated Security=True" providerName="System.Data.SqlClient" /> </connectionStrings> In the same way you have another AppSettings.config file like the following. <appSettings> <add key="aspnet:UseTaskFriendlySynchronizationContext" value="true" /> <add key="ValidationSettings:UnobtrusiveValidationMode" value="WebForms" /> </appSettings> That's it. Hope you like this post. Stay tuned for more..

    Read the article

  • Importing PKCS#12 (.p12) files into Firefox From the Command Line

    - by user11165
    I've posted this question on the Ubuntu and Firefox forums and really could do with some help; does anyone know where I could look, or have the answer? I'm hoping the power of social media will come through. I need to perform the following action in Firefox 3.6.x: open Edit - Preferences - Advanced - Encryption - View Certificates - Your Certificates - Import. However, I need the same functionality from the bash command line. So far I've established that the following command is supposed to be used: certutil -A -t "u,u,u" -d /home/df001/.mozilla/firefox/qe5y5lht.tc.default/ -n "mycert" -i client.p12 This executes with no issues; however, the certificate doesn't show up in any Firefox certificate store. I have noted that prior to running this command I had cert8.db, key3.db and secmod.db files in the above folder; after running the command, certutil seems to have created cert9.db, key4.db and pkcs12.txt files. Listing the contents using the command certutil -L -d sql:/home/df001/.mozilla/firefox/qe5y5lht.tc.default/ does seem to confirm that my attempts at importing files into a certificate store of some kind have worked, because I get: Certificate Nickname / Trust Attributes (SSL,S/MIME,JAR/XPI): Thawte SSL CA ,, / Go Daddy Secure Certification Authority ,, / Thawte SGC CA ,, / Entrust Certification Authority - L1C ,, / My Nero CT,C,c / mynero P,, / davidfield - Internet Widgits Pty Ltd u,u,u. So, having tried this and heading back to the web, I came across this command: pk12util -d /home/df001/.mozilla/firefox/qe5y5lht.tc.default/ -i client.p12 -n "David Field" -P "cert8.db" This again appears to be importing something somewhere; however, viewing certificates from the Firefox interface still doesn't show the imported cert. I'm surmising from this that certutil and pk12util are creating a new NSS database which Firefox isn't reading. So my question is: how can I import the p12 cert from the command line so it displays in the Firefox certificate manager interface? Why have I posted this here and not just on the Firefox forum? Well, I will copy and post the same question there as well, but the ability to do this from the command line is important, as I have potentially 2000 machines which will need a user cert imported into Firefox via a p12 file. I need to do this in the form of a script; I thought the hard part was going to be making the p12 file from the Microsoft 2003 CA, but it turns out that's easy. I can't just import via the GUI and copy over cert8.db 2000 times, and I can't ask users to use the CA web interface because it's for VPN access: the users are off site, and they need the VPN to get to the cert server. Is there anyone out there who can help? By the way, I don't have the Tor button installed.

    Read the article

  • Apache doesn't load .php files

    - by Haddex
    First, sorry for my English and for asking something that's answered all over the web; I've read a lot of posts about this problem but I still can't find the solution. I'm a web developer who recently moved to Ubuntu from Windows 7. I had a website done (it's online and working) and I set up LAMP to keep working on it. I made a test.php file containing <?php phpinfo(); ?> and put it in the /var/www/html directory; it shows all the information about PHP and I was really happy: "OK, it's all done, tomorrow I will work hard." But I placed my whole site into /var/www/html, not in a subfolder, so the index.php is in /var/www/html, and guess what: none of my .php files load, the browser just keeps waiting. What I did: I restarted Apache with /etc/init.d/apache2 restart; I tried the test.php file again and it works fine; I put an .html file in /var/www/html and it works fine too. I looked at /etc/apache2/sites-enabled/000-default.conf and it says: DocumentRoot /var/www/html. I looked at /etc/apache2/mods-enabled/dir.conf and it says: DirectoryIndex index.html index.cgi index.pl index.php ... Edit: I think it's something related to phpMyAdmin, as if I'm not able to connect to the database, but I get nothing on the screen when trying to load the page, so I'm not sure. I can access the URL localhost/phpmyadmin and I edited the connection.php file like this: <?php # FileName="Connection_php_mysql.htm" # Type="MYSQL" # HTTP="true" $hostname_rakstadconnection = "localhost"; $database_rakstadconnection = "rakstadclandb"; $username_rakstadconnection = "root"; $password_rakstadconnection = "admin"; $rakstadconnection = mysql_connect($hostname_rakstadconnection, $username_rakstadconnection, $password_rakstadconnection) or trigger_error(mysql_error(),E_USER_ERROR); mysql_query("SET NAMES 'utf8'"); ?> The name of the database is correct, as are the user and password. http://i89.photobucket.com/albums/k220/Haddex/Capturadepantallade2014-06-09112609_zpsc45ddb72.png http://i89.photobucket.com/albums/k220/Haddex/Capturadepantallade2014-06-09112120_zps0b9e15f7.png Edit 2: could this be because it's a website that I brought to Linux from Windows? I used Dreamweaver. Edit 3: I changed the # to /*/, nothing. The error.log file says: [Mon Jun 09 17:08:13.627881 2014] [:error] [pid 1517] [client 127.0.0.1:46663] PHP Warning: require_once(/var/www/html/Connections/rakstadconnection.php): failed to open stream: Permission denied in /var/www/html/index.php on line 1 [Mon Jun 09 17:08:13.627933 2014] [:error] [pid 1517] [client 127.0.0.1:46663] PHP Fatal error: require_once(): Failed opening required 'Connections/rakstadconnection.php' (include_path='.:/usr/share/php:/usr/share/pear') in /var/www/html/index.php on line 1 I'm reading the error log, but... should I add a Linux path to my index.php file? I don't think so. Thanks.

    Read the article

  • ZFS for Database Log Files

    - by user12620111
    I've been troubled by drop outs in CPU usage on my application server, characterized by the CPUs suddenly going from close to 90% CPU busy to almost completely CPU idle for a few seconds. Here is an example of a drop out as shown by a snippet of vmstat data taken while the application server is under a heavy workload.

        # vmstat 1
         kthr      memory            page            disk          faults      cpu
         r b w   swap  free  re  mf pi po fr de sr s3 s4 s5 s6   in   sy   cs us sy id
         1  0 0 130160176 116381952 0 16 0 0 0 0 0 0  0  0  0 207377 117715 203884 70 21  9
         12 0 0 130160160 116381936 0 25 0 0 0 0 0 0  0  0  0 200413 117162 197250 70 20  9
         11 0 0 130160176 116381920 0 16 0 0 0 0 0 0  1  0  0 203150 119365 200249 72 21  7
         8  0 0 130160176 116377808 0 19 0 0 0 0 0 0  0  0  0 169826  96144 165194 56 17 27
         0  0 0 130160176 116377800 0 16 0 0 0 0 0 0  0  0  1  10245   9376   9164  2  1 97
         0  0 0 130160176 116377792 0 16 0 0 0 0 0 0  0  0  2  15742  12401  14784  4  1 95
         0  0 0 130160176 116377776 2 16 0 0 0 0 0 0  1  0  0  19972  17703  19612  6  2 92
         14 0 0 130160176 116377696 0 16 0 0 0 0 0 0  0  0  0 202794 116793 199807 71 21  8
         9  0 0 130160160 116373584 0 30 0 0 0 0 0 0 18  0  0 203123 117857 198825 69 20 11

    This behavior occurred consistently while the application server was processing synthetic transactions: HTTP requests from JMeter running on an external machine. I explored many theories trying to explain the drop outs, including:
      - Unexpected JMeter behavior
      - Network contention
      - Java Garbage Collection
      - Application Server thread pool problems
      - Connection pool problems
      - Database transaction processing
      - Database I/O contention

    Graphing the CPU %idle led to a breakthrough: several of the drop outs were 30 seconds apart. With that insight, I went digging through the data again, looking for other outliers that were 30 seconds apart. In the database server statistics, I found spikes in the iostat "asvc_t" (average response time of disk transactions, in milliseconds) for the disk drive that was being used for the database log files. Here is an example:

                            extended device statistics
            r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
            0.0 2053.6    0.0  8234.3  0.0  0.2    0.0    0.1   0  24 c3t60080E5...F4F6d0s0
            0.0 2162.2    0.0  8652.8  0.0  0.3    0.0    0.1   0  28 c3t60080E5...F4F6d0s0
            0.0 1102.5    0.0 10012.8  0.0  4.5    0.0    4.1   0  69 c3t60080E5...F4F6d0s0
            0.0   74.0    0.0  7920.6  0.0 10.0    0.0  135.1   0 100 c3t60080E5...F4F6d0s0
            0.0  568.7    0.0  6674.0  0.0  6.4    0.0   11.2   0  90 c3t60080E5...F4F6d0s0
            0.0 1358.0    0.0  5456.0  0.0  0.6    0.0    0.4   0  55 c3t60080E5...F4F6d0s0
            0.0 1314.3    0.0  5285.2  0.0  0.7    0.0    0.5   0  70 c3t60080E5...F4F6d0s0

    Here is a little more information about my database configuration:
      - The database and application server were running on two different SPARC servers.
      - Storage for the database was on a storage array connected via 8 gigabit Fibre Channel.
      - Data storage and log files were on different physical disk drives.
      - Reliable low latency I/O is provided by battery backed NVRAM.
      - Highly available: two Fibre Channel links accessed via MPxIO, two mirrored cache controllers, and the log file physical disks mirrored in the storage device.
      - Database log files on a ZFS filesystem with cutting-edge technologies, such as copy-on-write and end-to-end checksumming.

    Why would I be getting service time spikes in my high-end storage?

    First, I wanted to verify that the database log disk service time spikes aligned with the application server CPU drop outs, and they did. At first, I guessed that the disk service time spikes might be related to flushing the write through cache on the storage device, but I was unable to validate that theory. After searching the WWW for a while, I decided to try using a separate log device:

        # zpool add ZFS-db-41 log c3t60080E500017D55C000015C150A9F8A7d0

    The ZFS log device is configured in a similar manner as described above: two physical disks mirrored in the storage array. This change to the database storage configuration eliminated the application server CPU drop outs. Here is the zpool configuration:

        # zpool status ZFS-db-41
          pool: ZFS-db-41
         state: ONLINE
         scan: none requested
        config:
                NAME                   STATE
                ZFS-db-41              ONLINE
                  c3t60080E5...F4F6d0  ONLINE
                logs
                  c3t60080E5...F8A7d0  ONLINE

    Now, the I/O spikes look like this:

                            extended device statistics
            r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
            0.0 1053.5    0.0  4234.1  0.0  0.8    0.0    0.7   0  75 c3t60080E5...F8A7d0s0
                            extended device statistics
            r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
            0.0 1131.8    0.0  4555.3  0.0  0.8    0.0    0.7   0  76 c3t60080E5...F8A7d0s0
                            extended device statistics
            r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
            0.0 1167.6    0.0  4682.2  0.0  0.7    0.0    0.6   0  74 c3t60080E5...F8A7d0s0
            0.0  162.2    0.0 19153.9  0.0  0.7    0.0    4.2   0  12 c3t60080E5...F4F6d0s0
                            extended device statistics
            r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
            0.0 1247.2    0.0  4992.6  0.0  0.7    0.0    0.6   0  71 c3t60080E5...F8A7d0s0
            0.0   41.0    0.0    70.0  0.0  0.1    0.0    1.6   0   2 c3t60080E5...F4F6d0s0
                            extended device statistics
            r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
            0.0 1241.3    0.0  4989.3  0.0  0.8    0.0    0.6   0  75 c3t60080E5...F8A7d0s0
                            extended device statistics
            r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
            0.0 1193.2    0.0  4772.9  0.0  0.7    0.0    0.6   0  71 c3t60080E5...F8A7d0s0

    We can see the steady flow of 4k writes to the ZIL device from O_SYNC database log file writes. The spikes are from flushing the transaction group. Like almost all problems that I run into, once I thoroughly understand the problem, I find that other people have documented similar experiences. Thanks to all of you who have documented alternative approaches. Saved for another day: now that the problem is obvious, I should try "zfs:zfs_immediate_write_sz" as recommended in the ZFS Evil Tuning Guide.

    References:
      - The ZFS Intent Log
      - Solaris ZFS, Synchronous Writes and the ZIL Explained
      - ZFS Evil Tuning Guide: Cache Flushes
      - ZFS Evil Tuning Guide: Tuning ZFS for Database Performance

    Read the article

  • Should I add the vcxproj.filter files to source control

    - by jschroedl
    While evaluating Visual Studio 2010 Beta 2, I see that in the converted directory my vcproj files have become vcxproj files. There are also vcxproj.filters files alongside each project, which appear to contain a description of the folder structure (\Source Files, \Header Files, etc.). Do you think these filter files should be kept per-user, or should they be shared across the whole dev group and checked into SCC? My current thinking is to check them in, but I wondered if there are any reasons not to do that, or perhaps good reasons that I should definitely check them in. The obvious benefit is that the folder structures will match if I'm looking at someone else's machine, but maybe they'd like to reorganize things logically?

    Read the article

  • Open Excel 2007 files and save them in 97-2003 format from VBA

    - by ABB
    I have a weird situation where I have a set of Excel files, all with the extension .xls, in a directory where I can open all of them just fine in Excel 2007. The odd thing is that I cannot open them in Excel 2003 on the same machine without first opening the file in 2007 and saving it as an "Excel 97-2003 Workbook". Before I save the file as an "Excel 97-2003 Workbook" from Excel 2007, when I open the Excel files in 2003 I get an error that the file is not in a recognizable format. So my question is: if I already have the Excel file open in 2007 and I already have the file name of the open file stored in a variable, how can I programmatically mimic the action of going to the Office button, selecting "Save As" and then selecting "Excel 97-2003 Workbook"? I've tried something like the below but it does not save the file at all: ActiveWorkbook.SaveAs TempFilePath & TempFileName & ".xls", FileFormat:=56 Thanks for any help or guidance!

    Read the article

  • Fancybox can't open partial view

    - by 1110
    I want to open a partial view in fancybox, like a modal view, but when I click the link it opens a whole new page instead of opening it in fancybox. I don't know if this is important, but I have one more function that opens images in fancybox (on the same page) and it works; the class names are different. @Html.ActionLink("Feedback", "New", "Feedback", null, new { @class = "lightbox" }) JS $('.lightbox').fancybox(); Action method public ActionResult New() { return PartialView(); }

    Read the article

  • Visual Studio 2010 Beta: "start without debugging" console won't stay open

    - by CousinVinny
    I want to keep the console (output) window open to view the output of the project I am working on. I have enabled the "start without debugging" option and added the button to the debugging toolbar, but alas it fails to keep the window open nicely like Visual Studio 2008 did. I have to add cin.getline() and so on to get it to stay open, but I don't want to type that. Any suggestions on how to alter Visual Studio 2010 to keep it open, or any debugging tricks to make it easier to view output for longer than a flash? Visual Studio used to have a "Press any key to continue" prompt; I want it back...

    Read the article

  • How to find out where (or if) MySQL 5 logs are stored on a WHM/cPanel machine

    - by moi
    I have a WHM/cPanel reseller hosting account on a virtual private server (Linux), and I have root access to the machine via SSH. I am trying to locate a file that contains information that will help me determine which users have accessed which databases and from which hosts. I would imagine this kind of data is stored in a log file somewhere. The MySQL documentation says: "The general query log - Established client connections and statements received from clients." See: http://dev.mysql.com/doc/refman/5.0/en/server-logs.html It also says: "By default, all log files are created in the mysqld data directory." So, I am NOT asking where the general query logs are stored (because I expect I will get answers saying "it depends"). Please help me work out: "How can I go about finding out where the MySQL general query log is stored on a Linux machine?" A couple of things I've already tried: I looked at /etc/my.cnf; it was a tiny file that only contained the following: [mysqld] skip-bdb skip-innodb set-variable = max_connections=500 safe-show-database I have also looked in /var/lib/mysql/ but I could not see any log-like file names in that directory. Any clues on this would be most welcome.

    Read the article

  • Windows batch file to delete .svn files and folders

    - by Marco Demaio
    Hi, in order to delete all ".svn" files/folders/subfolders in "myfolder" I use this simple line in a batch file: FOR /R myfolder %%X IN (.svn) DO (RD /S /Q "%%X") This works, but if there are no ".svn" files/folders the batch file shows a warning saying: "The system cannot find the file specified." This warning is very noisy, so I was wondering how to make it understand that if it doesn't find any ".svn" files/folders it must skip the RD command. Usually using wildcards would suffice, but in this case I don't know how to use them, because I don't want to delete files/folders with a .svn extension; I want to delete the files/folders named exactly ".svn". So if I do this: FOR /R myfolder %%X IN (*.svn) DO (RD /S /Q "%%X") it would NOT delete files/folders named exactly ".svn" anymore. I tried also this: FOR /R myfolder %%X IN (.sv*) DO (RD /S /Q "%%X") but it doesn't work either; it deletes nothing. Thanks for any help!
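
    Not an answer to the batch-file question itself, but for comparison here is the same cleanup as a small Python sketch: walk the tree, remove every directory named exactly ".svn", and simply do nothing (no warning) when none exist. The folder name "myfolder" is taken from the question.

        # Alternative sketch in Python: delete all directories named exactly ".svn".
        import os
        import shutil

        def remove_svn_dirs(root):
            for dirpath, dirnames, filenames in os.walk(root, topdown=True):
                if ".svn" in dirnames:
                    shutil.rmtree(os.path.join(dirpath, ".svn"))
                    dirnames.remove(".svn")   # don't descend into the tree we just deleted

        remove_svn_dirs("myfolder")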

    Read the article

  • TicTac Photo and Windows 7

    - by Ben
    Hello, my wife has been creating a TicTac Photo album. I had to upgrade to Windows 7 as I'd had enough of Vista, so I backed up the TicTac Photo file and the photos to an external hard disk and performed a fresh install of Win 7. Now here is the problem: TicTac Photo says it can't find the photos in the album. The locations were as follows: Vista: C:\Users\Kelly\Pictures Win 7: C:\Users\Kelly\My Pictures When I try to create a Pictures folder under Kelly, it pops up a message about merging the two folders and simply moves the pictures to the My Pictures folder. Does anyone know a way to make a folder called Pictures so I can eliminate the file path problem and then try again with TicTac Photo support to get them to fix my file? My wife is going to kill me, as it's our wedding album and she has spent upwards of 30 hours designing it, and me upgrading to Win 7 means it's all my fault. She does not understand file paths etc. I'm going to try to open the album file in a text editor and see if I can see anything, but thought I would ask here as well. Any help appreciated.

    Read the article

  • How to sign installation files of a Visual Studio .msi

    - by Alex
    This may be a duplicate, though I can't find it at this time; if so, please point me in the right direction. I recently purchased an Authenticode certificate from GlobalSign and am having problems signing my files for deployment. There are a couple of .exe files that are generated by a project and then put into an .msi. When I sign the .exe files with signtool, the certificate is valid and they run fine. The problem is that when I build the .msi (using the Visual Studio setup project) the .exe files lose their signatures. I can sign the .msi after it is built, but the installed .exe files continue the whole "unknown publisher" business. How can I retain the signature on these files for installation on the client machine? Your help is appreciated. -Alex

    Read the article

  • Eclipse CDT: Import source / header files into my new project, without duplicating them

    - by Tom
    Hi all, I'm sure there is a very simple solution for this. I have a bunch of .cpp / .h files from a project, say in the directory ~/files. On the other hand, I want to create a C++ project using Eclipse to work on those files, so I put my workspace in ~/wherever. Then I create a C++ project, ~/wherever/project, and include the source files (located in ~/files). The problem I'm having is that the files are now duplicated in ~/wherever/project, and I would like to avoid that, especially so I know which copy of the file to commit. Is this possible? I'm sure it is, but I can't get it working. Thanks in advance.

    Read the article
