Search Results

Search found 39577 results on 1584 pages for 'temp files'.


  • Script to recursively grep data from certain files in the directory

    - by Jude
    I am making a simple shell script to cut down the time I spend searching all directories under a parent directory and grepping a few things inside some files. Here's my script:

        #!/bin/sh
        MainDir=/var/opt/database/1227-1239/
        cd "$MainDir"
        for dir in $(ls); do
            grep -i "STAGE,te_start_seq Starting" "$dir"/his_file | tail -1 >> /home/xtee/sst-logs.out
            if [ -f "$dir"/sysconfig.out]; then
                grep -A 1 "Drive Model" "$dir"/sysconfig.out | tail -1 >> /home/xtee/sst-logs.out
            else
                grep -m 1 "Physical memory size" "$dir"/node0/setupsys.out | tail -1 >> /home/xtee/sst-logs.out
            fi
        done

    The script is supposed to grep the string "STAGE,te_start_seq Starting" in the file his_file and dump it to sst-logs.out, which it does. My problem is the part in the if statement. The script should check the current directory for sysconfig.out; if it exists, grep the drive model and dump it to sst-logs.out, otherwise change directory to node0, grep the physical memory size from setupsys.out and dump that to sst-logs.out instead. The if/then/else branch doesn't seem to work: it doesn't dump any data at all, yet if I execute the grep manually I do get data. What is wrong with my shell script? Is there a more efficient way of doing this?
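
    A likely culprit, going only by the snippet as posted, is the missing space before the closing bracket in the test: [ -f "$dir"/sysconfig.out] is not a valid test expression, so the branch never behaves as expected. A minimal sketch of a corrected loop, assuming the same paths and output file:

        #!/bin/sh
        MainDir=/var/opt/database/1227-1239/
        cd "$MainDir" || exit 1
        for dir in */; do
            # most recent start-sequence line from his_file
            grep -i "STAGE,te_start_seq Starting" "$dir/his_file" | tail -1 >> /home/xtee/sst-logs.out
            if [ -f "$dir/sysconfig.out" ]; then    # note the space before ]
                grep -A 1 "Drive Model" "$dir/sysconfig.out" | tail -1 >> /home/xtee/sst-logs.out
            else
                grep -m 1 "Physical memory size" "$dir/node0/setupsys.out" | tail -1 >> /home/xtee/sst-logs.out
            fi
        done

    Iterating over */ instead of $(ls) also keeps directory names with spaces intact.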

    Read the article

  • Dreamweaver CS5 Test server works but cannot connect to host server through files window

    - by Toni
    I've been managing this site for a long time and update coupons on it approximately every 60 days. For some reason, I'm now having problems: I opened DW CS5 today and made the changes necessary to update the coupons. I was able to connect to the host server with no problem, but most of my coupon images were not showing up. DW tells me I have 70 broken links, which can't be the case because I've reviewed them. Some links work and are the same as the broken links other than the file name. Unable to figure it out, I thought maybe restarting my Mac would help. However, upon logging back into DW, I am now unable to connect to the host server at all. I get an FTP error notice that the file doesn't exist or there is a permissions problem. The funny thing is, I can connect successfully if I test the connection through the Site Management window. I have connected to my host server through FileZilla and can see all the files there; unfortunately, I still can't get the web pages to display the coupons. Has anyone else had this issue, and if so, what is the solution? I feel like this is probably a simple fix, but I cannot for the life of me determine what it is! If anyone knows a solution, I'd really appreciate the help! -Toni

    Read the article

  • I have deleted python files in /usr/bin and can't reinstall them

    - by Plonkaa
    I am a novice at Ubuntu and unfortunately I have deleted three files in the /usr/bin folder: python2.7, python and python2.6. Now my Update Manager won't work, and when I type python into GNOME it says it is no longer there. Please help me; I've tried loads of different things but it just won't work. The closest I got was the following: I typed sudo apt-get -f install and thought I had fixed it, but then I got an error message:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages were automatically installed and are no longer required:
          gir1.2-folks-0.6 gir1.2-polkit-1.0 libcogl5 mutter-common gir1.2-json-1.0 libcaribou0
          gir1.2-accountsservice-1.0 gir1.2-clutter-1.0 gir1.2-gkbd-3.0 gir1.2-networkmanager-1.0
          caribou libcogl-common libmutter0 gir1.2-mutter-3.0 gjs gir1.2-caribou-1.0
          libclutter-1.0-0 gir1.2-telepathylogger-0.2 libclutter-1.0-common cups-pk-helper
          gir1.2-upowerglib-1.0 gir1.2-cogl-1.0 libmozjs185-1.0 gir1.2-telepathyglib-0.12
          gir1.2-gee-1.0 libgjs0c gnome-shell-common
        Use 'apt-get autoremove' to remove them.
        The following extra packages will be installed:
          ubuntu-sso-client
        The following packages will be upgraded:
          ubuntu-sso-client
        1 upgraded, 0 newly installed, 0 to remove and 35 not upgraded.
        2 not fully installed or removed.
        Need to get 0 B/57.7 kB of archives.
        After this operation, 16.4 kB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        Setting up python-minimal (2.7.2-7ubuntu2) ...
        /var/lib/dpkg/info/python-minimal.postinst: 4: python2.7: not found
        dpkg: error processing python-minimal (--configure):
         subprocess installed post-installation script returned error exit status 127
        Errors were encountered while processing:
         python-minimal
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Any advice is appreciated!
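
    The postinst is failing because /usr/bin/python2.7 itself is gone, so putting the interpreter binary back should unblock dpkg. A possible recovery sketch (package names are the ones from 11.10/12.04-era releases; if apt-get download is not available on your release, fetch the .deb from packages.ubuntu.com instead):

        # fetch the interpreter package without needing python, then unpack it
        apt-get download python2.7-minimal
        sudo dpkg -i python2.7-minimal_*.deb

        # with /usr/bin/python2.7 back, let apt finish what it started
        sudo apt-get -f install
        sudo apt-get install --reinstall python2.7 python-minimal python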

    Read the article

  • How can I "diff" two files with Nautilus?

    - by bioShark
    I have installed Meld and found it's a great comparison tool. Unfortunately there is no integration with Nautilus 3.2, which means I can't right-click on files and select an option to open them in Meld for comparison. I have seen in the tool's comments that it needs the diff-ext package to be installed. That package has been removed from the Ubuntu universe repository, I am guessing because of GTK 3. Even when I manually download diff-ext from SourceForge, the configure check fails with the message:

        checking for DIFF_EXT... configure: error: Package requirements
        (libnautilus-extension >= 2.14.0 gconf-2.0 >= 2.14.0 gnome-vfs-module-2.0 >= 2.14) were not met:
        No package 'libnautilus-extension' found
        No package 'gconf-2.0' found
        No package 'gnome-vfs-module-2.0' found

    OK, so from this output I gather that GTK 2 is indeed required to install the diff extension for Nautilus. Now, my question is: is there a way to integrate Meld into Nautilus? Or are there any other diff-based tools that integrate with the current, GTK 3-based Nautilus? I am using Ubuntu 11.10, if there was any doubt so far. Cheers and thanks in advance.
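
    One low-tech workaround that needs no extension package at all is a Nautilus script: a small shell script dropped into the scripts folder shows up in the right-click menu under Scripts and receives the selected files as arguments. A sketch, assuming Nautilus 3.2 still reads ~/.gnome2/nautilus-scripts (newer versions use ~/.local/share/nautilus/scripts instead):

        mkdir -p ~/.gnome2/nautilus-scripts
        # Nautilus passes the selected files as positional arguments to the script
        printf '#!/bin/sh\nexec meld "$@"\n' > ~/.gnome2/nautilus-scripts/"Compare in Meld"
        chmod +x ~/.gnome2/nautilus-scripts/"Compare in Meld"

    Select two files, right-click, and pick Scripts > Compare in Meld.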

    Read the article

  • Impossible to select folders and files with mouse (Ubuntu 12.04)

    - by François
    First-time post for me here (after being a regular reader for two years, though), so thank you all for the quality of replies and help provided. My problem is apparently very simple, but a tricky one. I just installed Ubuntu 12.04(1) along with the GNOME 3 Shell environment on my new desktop PC, an Acer Aspire X3995 (see config below). Everything works (more or less) so far (I still have problems with sound and disabled two-finger gestures on my screen, which I think I will have to deal with through X configuration settings), but the main problem is that I cannot select files/folders with my USB mouse. When I try to double-click on them, nothing happens (sometimes one folder or file is selected but then deselected again). Note that navigation works perfectly from the USB keyboard and from the touch-screen (I am using a 23" wide touch-screen Acer Monitor T231Hbmid). Also, the mouse works perfectly for other menu navigation, with the only difference that the text of certain menus is selected as if I were holding the left button down on them. So I assume the problem is related only to the mouse. Needless to say, the usual basic hardware checks have been performed (unplugging, powering off, etc.). My level is simply "advanced user", meaning that if you provide me with intelligible input I should find my way, but please don't expect too much technical/specific knowledge... :) Please let me know if you need more information on this bug. Now, fingers crossed... and thanks in advance! Ciao, François

    Config of Acer Aspire X3995: Ubuntu 12.04 / GNOME 3 Shell environment / Intel Core i5 3450 / nVidia GeForce 605, 1 GB. Screen: Acer Monitor TFT 23" wide T231Hbmid

    Read the article

  • Finding header files

    - by rwallace
    A C or C++ compiler looks for header files using a strict set of rules: relative to the directory of the including file (if "" was used), then along the specified and default include paths, failing if the file is still not found. An ancillary tool such as a code analyzer (which I'm currently working on) has different requirements: it may, for a number of reasons, not have the benefit of the setup performed by a complex build process, and has to make the best of what it is given. In other words, it may encounter a header file that is not present in the include paths it knows about, and has to take its best shot at finding that file itself. I'm currently thinking of using the following algorithm:

    1. Start in the directory of the including file.
    2. Is the header file found in the current directory or any subdirectory thereof? If so, done.
    3. If we are at the root directory, the file doesn't seem to be present on this machine, so skip it.
    4. Otherwise move to the parent of the current directory and go to step 2.

    Is this the best algorithm to use? In particular, does anyone know of any case where a different algorithm would work better?
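
    For what it's worth, the walk-up-and-search idea is easy to prototype with standard tools before wiring it into the analyzer. A rough shell sketch of the algorithm above (GNU find is assumed for -quit; names are illustrative only):

        #!/bin/sh
        # usage: findheader.sh <including-file> <header-name>
        dir=$(cd "$(dirname "$1")" && pwd)
        header=$2
        while :; do
            hit=$(find "$dir" -name "$header" -print -quit 2>/dev/null)
            if [ -n "$hit" ]; then
                echo "$hit"                  # found: report and stop
                exit 0
            fi
            [ "$dir" = "/" ] && break        # reached the root: give up
            dir=$(dirname "$dir")            # otherwise retry from the parent
        done
        echo "$header: not found" >&2
        exit 1

    One practical refinement is caching results per directory, since step 2 gets expensive as the search climbs toward the root and re-scans ever larger trees.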

    Read the article

  • Launching LibreOffice Files - Unusual Window Behaviour

    - by Mike
    Ubuntu 11.10 Unity. If I launch LibreOffice Files (ODS, ODT, ODP) with a double click in their home folders they launch with the appropriate application (Calc, Writer, Impress) however the application windows do not display the usual Close, Minimize, Restore buttons in the left hand corner. There is just a blank space where they should be. In the Unity launcher on the left there is not the expected white arrow next to the launcher icon and the open app is not seen by the ALT+TAB keystroke when you have multiple windows open. For example: if you have an app open in this way and say Firefox, if you minimize Firefox you are only left with the desktop and you have no way to locate the open LibreOffice app. Clicking the app's icon in the launcher simply opens another instance. A messy workaround is to click [Firefox] to a restored window and the LibreOffice app can be seen behind it allowing a mouse click to bring it forward. If I open the LibreOffice app from the launcher and then Open a file from its menu, it all behaves as expected. I don't find this as convenient; anyone know how to fix the bad behaviour?

    Read the article

  • TraceTune: Larger Files and History

    - by Bill Graziano
    I updated TraceTune over the weekend.  I increased the trace file upload size to 20MB.  We've processed over half a million rows of trace data so far and I'm confident this won't kill the server. I added average CPU and average disk reads to the screen that lists the SQL statements in a trace file. I only added these two.  I'm pretty sure average writes isn't that important.  I'm still thinking about average duration.  I'm trying to balance showing you what you need with a clean, simple interface.  Plus I have a way to see the averages that I describe further down. TraceTune now keeps the last 10 files that you've uploaded and will give you some basic details about each file. I think the last change I made is the most interesting. For each SQL statement, I show the history of that statement.  You'll see each trace file where this statement was found.  It will list the averages for CPU, reads, writes and duration.  This will quickly show you if you're improving the performance of that query.  In my screen shot above you can see that even though the execution counts are very different, the averages are consistent. If you want to see what queries are consuming the most resources on your server, give TraceTune a try.

    Read the article

  • accessing live usb files from new hd ubuntu install

    - by Robin Bailey
    After my live USB (ubuntu 12.04 lts) refused to boot, I proceeded to install the same Ubuntu version on the laptop hard drive (a dual boot next to Win xp). This all went well without a hitch. Previous to this, I spent several weeks enjoying and exploring ubuntu from the usb pendrive. During this time I changed lots of settings and customized Firefox and more. Now, I'd like to import the home folder from the usb drive into the new install home folder on the hard disk, which is the purported folder that holds all those special settings to my knowledge. Unfortunately and only being familiar with Windows file systems, the view of the usb file system from the new hdd install is totally perplexing. I can't find anything that looks anywhere close to the original file system. More, I can't find any of the files I had created and stored there, like the LibreOfficeCalc file that has all my passwords (this one is really discouraging) that was stored on the ubuntu desktop. Help me find this file alone and I'll bow down with full apologies to any and all computer gods. Being able to import all those customizing settings into the new install would be a major bonus also, but hey, I'm not greedy. I'll take the passwords file and be happy! And humble! I would be very grateful for some clear, understandable help on this. Thanks
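
    If the pendrive was created with persistence, everything changed in those live sessions lives inside a casper-rw file (or partition) on the stick, not in a normal folder tree, which is why nothing looks familiar when browsing it from the new install. A hedged sketch for digging the files out (device and user names here are assumptions; check yours with lsblk or sudo fdisk -l):

        sudo mkdir -p /mnt/usb /mnt/persist
        sudo mount /dev/sdb1 /mnt/usb                       # the live USB's FAT partition
        sudo mount -o loop /mnt/usb/casper-rw /mnt/persist  # the persistence overlay
        ls /mnt/persist/home/ubuntu                         # old home folder, Desktop included

    If the stick was made without persistence, files created during live sessions were held in RAM only and are unfortunately not recoverable.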

    Read the article

  • Eliminating zero-length files

    - by RhZ
    I have been having multiple crashes recently: 4-5 last night within a few hours. I posted about it before and got an answer, but I'm not sure how to proceed. The messages in my logs right before the crash are multiple complaints about valid eCryptfs headers. The cron entry might not be related; I don't think I saw that in previous crashes:

        xxx-desktop kernel: [ 1112.274474] Valid eCryptfs headers not found in file header region or xattr region, inode 32376924
        xxx-desktop CRON[4212]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)

    So I was sent to an answer providing this script:

        for i in find $(mount | grep " on $HOME type ecryptfs" | awk '{print $1}') -size 0c; do
            if ! fuser -v $i; then
                rm -f $i
            fi
        done

    I did find some zero-byte files, though not in exactly the right place (a folder called .private, as I remember), but I need to fix this; it's pretty bad right now. So I need to delete any of them that are not in use. I am a little too clueless here: can someone walk me through executing this script? I don't know how.
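
    As pasted, that loop is missing a command substitution around find, so the shell never actually runs it. A sketch of what the answer presumably intended; it only prints at first, so the list can be reviewed before letting it delete anything:

        #!/bin/sh
        # the mount source ($1) is the encrypted lower directory, e.g. /home/you/.Private
        for lower in $(mount | grep " on $HOME type ecryptfs" | awk '{print $1}'); do
            for i in $(find "$lower" -type f -size 0c); do
                if ! fuser -v "$i" >/dev/null 2>&1; then
                    echo "would remove: $i"   # change echo to: rm -f "$i"  once the list looks right
                fi
            done
        done

    Save it as clean-zero.sh and run it with sh clean-zero.sh from a terminal.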

    Read the article

  • Ubuntu automatic logout whenever I execute exe files

    - by KeepTrying
    I have a problem. Here's the thing. There were four partitions on my hard drive:

    - one for the Ubuntu root folder
    - one for the Ubuntu home folder
    - one for general stuff like music and movies
    - and the last one for swap

    To install Windows 7, I resized the partitions and changed their order using GParted. I moved all of the ext-formatted partitions to the left, so the spare space ended up on the right, then formatted that spare space as NTFS and installed Windows 7. After successfully installing Windows 7, I used a live USB to fix GRUB: I installed Boot-Repair and, with just one click, I can now dual boot Ubuntu and Windows 7. But because of the change in partition order, especially for the partition holding the home folder, I couldn't log in to Ubuntu. I used recovery mode and edited /etc/passwd. Almost everything is back to normal except one thing: the Windows apps that I installed via Wine don't work anymore. I run them from the Applications/Wine/Programs menu but nothing loads. One more thing: when I double-click on exe files to run them, Ubuntu suddenly logs out. Thank you for reading my post; it's quite long and my English is fairly poor. I appreciate anyone who takes the time to read it.

    Read the article

  • I deleted all files and folders (including hidden) from /home/username/ now in big trouble

    - by jeffery_the_wind
    I am logged into a remote ubuntu server, and I accidentally erased the entire /home/username/ directory for the current user. The only thing left is a hidden directory called .gvfs. I don't need anything of the Documents/Music/etc. Now it is not letting me cd into the /var/www/ directory, which has permissions 666 and it is owned by the current user. I am afraid to disconnect from my ssh session because I don't know if I will be able to get back on. Have I permanently created a problem? Is there a way I can replace the most important files to the /home/username/ directory? Thanks! ** EDIT ** Thanks everyone for the help. I figured the problem with cd into the /var/www/ was actually my permissions in the /var/www/ directory. It was set to 666, changed it to 755 and everything was good again. It doesn't look like anything systematic was ruined by deleting the contents of the user folder.
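
    If what is needed is just a sane starting point for the account (the shell dotfiles such as .bashrc and .profile), the system-wide skeleton can be copied back in; a hedged sketch, with username standing in for the real login:

        sudo cp -rT /etc/skel /home/username
        sudo chown -R username:username /home/username
        sudo chmod 755 /home/username

    The personal files themselves are not recoverable this way, and the leftover .gvfs mount point is normal and can be ignored. Staying in the existing ssh session was a good instinct: if the account relied on ~/.ssh/authorized_keys for access, recreate that before disconnecting.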

    Read the article

  • C++ Database vs Reading Files

    - by Ohmages
    I've been programming a C++ game/server for the past year. I have been using MySQL for character logins, items, monsters, etc. (I'm on Windows). My question is, what are some of the databases that big-time developers use, e.g. Battle.net, Diablo II, Diablo III, Mythos, Hellgate, and so on? Do they have their own database they built? Or do they use an existing framework for logins and character transfers? I do know that in Diablo II they use character files to transfer characters into the game world. But what about the login to Battle.net? Would it be wiser for me to stick with MySQL, or is there something out there faster and more stable, or should I create a login system that looks through a file to see if you provided the correct password? Can't wait to get some replies. Thanks! PS: Currently the framework is much like Battle.net, where you log in to a lobby, then create and join games. The game server and lobby server are different servers too. So I'm just wondering about the lobby server for logins, because I'm expecting several hundred thousand connections/logins.

    Read the article

  • Javascript object dependencies

    - by Anurag
    In complex client-side projects, the number of JavaScript files can get very large. However, for performance reasons it's good to concatenate these files and compress the resulting file for sending over the wire. I am having problems concatenating them, as in some cases the dependencies are included after they are needed. For instance, there are two files:

        /modules/Module.js        <requires Core.js>
        /modules/core/Core.js

    The directories are recursively traversed, and Module.js gets included before Core.js, which causes errors. This is just a simple example where dependencies span across directories, and there could be other, more complex cases. There are no circular dependencies, though. The JavaScript structure I follow is similar to Java packages, where each file defines a single object (I'm using MooTools, but that's irrelevant). The structure of each JavaScript file and its dependencies is always consistent:

        // Module.js
        var Module = new Class({
            Implements: Core,
            ...
        });

        // Core.js
        var Core = new Class({
            ...
        });

    What practices do you usually follow to handle dependencies in projects where the number of JavaScript files is huge and there are inter-file dependencies?
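
    One simple practice, short of adopting a full module loader or dependency parser, is to keep an explicit build manifest that lists files in dependency order and concatenate from that instead of walking directories. A minimal sketch, using the file names from the example above:

        # build.list (hand-maintained, dependencies first):
        #   modules/core/Core.js
        #   modules/Module.js
        mkdir -p build
        : > build/app.js
        while read -r f; do
            cat "$f" >> build/app.js
            echo     >> build/app.js     # guard against files missing a trailing newline
        done < build.list

    The manifest doubles as documentation of the load order, and an annotation like <requires Core.js> in each file header could later be parsed to generate it automatically.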

    Read the article

  • System.InvalidOperationException: Failed to map the path '/sharedDrive/Public'

    - by d03boy
    I'm trying to set up a page that will allow users to download files from a shared drive (where the file is actually sent via the page). So far this is what I have:

        public partial class TestPage : System.Web.UI.Page
        {
            protected DirectoryInfo dir;
            protected FileInfo[] files;

            protected void Page_Load(object sender, EventArgs e)
            {
                dir = new DirectoryInfo(Server.MapPath(@"\\sharedDrive\Public"));
                files = dir.GetFiles();
            }
        }

    The aspx page looks kind of like this:

        <% Response.Write(System.Security.Principal.WindowsIdentity.GetCurrent().Name); %>
        <ul>
        <% foreach (System.IO.FileInfo f in files) {
               Response.Write("<li>" + f.FullName + "</li>");
           } %>
        </ul>

    When I remove the erroneous parts of the code, the page tells me that the Windows identity I'm using is my user (which has access to the drive). I don't understand what the problem could be or what it's even complaining about.
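
    A likely reading of the exception is that Server.MapPath only maps virtual paths inside the web application, so handing it a UNC path fails before the share is ever touched. A hedged sketch of the adjustment: use the UNC path directly and keep MapPath for application-relative paths only.

        // a UNC share is already a physical path, so skip Server.MapPath
        dir = new DirectoryInfo(@"\\sharedDrive\Public");
        files = dir.GetFiles();

        // MapPath is for virtual paths within the site, e.g.
        // string physicalPath = Server.MapPath("~/Downloads");

    Whether the listing then succeeds depends on the identity the application pool runs under having rights to the share, which is a separate concern from the mapping error.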

    Read the article

  • Configure IIS7 to serve static content through the ASP.NET Runtime

    - by Anton Gogolev
    I searched high and low and still cannot find a definite answer. How do I configure IIS 7.0, or a web application in IIS, so that the ASP.NET runtime will handle all requests, including ones for static files like *.js, *.gif, etc.? What I'm trying to do is as follows. We have a kind of SaaS-y site which we can "skin" for every customer. "Skinning" means developing a custom master page and using a bunch of *.css files and other images. Quite naturally, I'm using a VirtualPathProvider, which operates like this:

        public override System.Web.Hosting.VirtualFile GetFile(string virtualPath)
        {
            if (PhysicalFileExists(virtualPath))
            {
                var virtualFile = base.GetFile(virtualPath);
                return virtualFile;
            }
            if (VirtualFileExists(virtualPath))
            {
                var brandedVirtualPath = GetBrandedVirtualPath(virtualPath);
                var absolutePath = HttpContext.Current.Server.MapPath(brandedVirtualPath);
                Trace.WriteLine(string.Format("Serving '{0}' from '{1}'",
                    brandedVirtualPath, absolutePath), "BrandingAwareVirtualPathProvider");
                var virtualFile = new VirtualFile(brandedVirtualPath, absolutePath);
                return virtualFile;
            }
            return null;
        }

    The basic idea is as follows: we have a branding folder inside our webapp, which in turn contains folders for each "brand", with "brand" being equal to the host name. That is, requests to http://foo.example.com/ should use static files from branding/foo_example_com, whereas http://bar.example.com/ should use content from branding/bar_example_com. Now what I want IIS to do is to forward all requests for static files to the StaticFileHandler, which would then use this whole "infrastructure" and serve the correct files. However, try as I might, I cannot configure IIS to do this.
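
    One commonly used switch for this on IIS 7 in integrated pipeline mode is runAllManagedModulesForAllRequests, which pushes every request, static files included, through the managed pipeline; a hedged web.config sketch (verify the behaviour against your IIS setup, and note the extra overhead it adds for genuinely static content):

        <system.webServer>
          <modules runAllManagedModulesForAllRequests="true" />
        </system.webServer>

    A VirtualPathProvider only sees requests that actually reach ASP.NET, so without this, or explicit handler mappings for *.css, *.js and the image extensions, IIS hands static files straight to its native static file handler and the branding lookup never runs.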

    Read the article

  • Recursive Perl detail need help

    - by Catarrunas
    Hi everybody, I think this is a simple problem, but I've been stuck on it for some time now! I need a fresh pair of eyes on this. The thing is, I have this code in Perl:

        #!c:/Perl/bin/perl
        use CGI qw/param/;
        use URI::Escape;

        print "Content-type: text/html\n\n";

        my $directory = param ('directory');
        $directory = uri_unescape ($directory);
        my @contents;
        readDir($directory);

        foreach (@contents) {
            print "$_\n";
        }

        #------------------------------------------------------------------------
        sub readDir(){
            my $dir = shift;
            opendir(DIR, $dir) or die $!;
            while (my $file = readdir(DIR)) {
                next if ($file =~ m/^\./);
                if(-d $dir.$file) {
                    #print $dir.$file. " ----- DIR\n";
                    readDir($dir.$file);
                }
                push @contents, ($dir . $file);
            }
            closedir(DIR);
        }

    I've tried to make it recursive. I need to get all the files of all the directories and subdirectories, with the full path, so that I can open the files later. But my output only returns the files in the current directory and the files in the first directory that it finds. If I have 3 folders inside the directory, it only shows the first one. Example of the cmd call: perl readDir.pl directory=C:/PerlTest/ Thanks
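
    Two details in the posted code would explain that behaviour, assuming a standard Perl: the bareword handle DIR is shared by every level of recursion, so the first recursive call clobbers the parent's handle, and $dir.$file concatenates without a separating slash, so the -d test usually fails below the first level. A sketch using a lexical handle and an explicit slash:

        sub readDir {
            my $dir = shift;
            opendir(my $dh, $dir) or die "$dir: $!";   # lexical handle: safe to recurse
            while (my $file = readdir($dh)) {
                next if $file =~ m/^\./;
                my $path = "$dir/$file";               # keep the separator explicit
                if (-d $path) {
                    readDir($path);
                }
                push @contents, $path;
            }
            closedir($dh);
        }

    File::Find from the standard library does the same walk with less ceremony, if pulling it in is an option.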

    Read the article

  • Makefile - Dependency generation

    - by Profetylen
    I am trying to create a makefile that automatically compiles and links my .cpp files into an executable via .o files. What I can't get working is automated (or even manual) dependency generation. When I uncomment the commented code below, nothing is recompiled when I run make build. All I get is make: Nothing to be done for 'build'., even if x.h (or any .h file) has changed. I've been trying to learn from this question: Makefile, header dependencies, dmckee's answer especially. Why isn't this makefile working? Clarification: I can compile everything, but when I modify any header file, the .cpp files that depend on it aren't rebuilt. So if I, for instance, compile my entire source, then change a #define in a header file and run make build, I get Nothing to be done for 'build'. (with either commented chunk of the code below uncommented).

        CC=gcc
        CFLAGS=-O2 -Wall
        LDFLAGS=-lSDL -lstdc++
        SOURCES=$(wildcard *.cpp)
        OBJECTS=$(patsubst %.cpp, obj/%.o,$(SOURCES))
        TARGET=bin/test.bin

        # Nothing happens when I uncomment the following. (automated attempt)
        #depend: .depend
        #
        #.depend: $(SOURCES)
        #	rm -f ./.depend
        #	$(CC) $(CFLAGS) -MM $^ >> ./.depend;
        #
        #include .depend

        # And nothing happens when I uncomment the following.
        # x.cpp and x.h are files in my project. (manual attempt)
        #x.o: x.cpp x.h

        clean:
        	rm -f $(TARGET)
        	rm -f $(OBJECTS)

        run: build
        	./$(TARGET)

        debug: build
        	nm $(TARGET)
        	gdb $(TARGET)

        build: $(TARGET)

        $(TARGET): $(OBJECTS)
        	@mkdir -p $(@D)
        	$(CC) $(LDFLAGS) $(OBJECTS) -o $@

        obj/%.o: %.cpp
        	@mkdir -p $(@D)
        	$(CC) -c $(CFLAGS) $< -o $@

        include $(DEPENDENCIES)
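
    One thing that stands out, assuming the layout above: the objects are built as obj/%.o, while both the commented manual rule and the generated .depend talk about plain x.o, so the dependency lines never match the targets make actually builds. A sketch of compiler-generated dependencies wired to the obj/ paths, using GCC's -MMD/-MP options (recipe lines tab-indented as usual):

        obj/%.o: %.cpp
        	@mkdir -p $(@D)
        	$(CC) -c $(CFLAGS) -MMD -MP $< -o $@

        # each obj/foo.o now gets a matching obj/foo.d listing the headers it uses
        -include $(OBJECTS:.o=.d)

    The leading dash on -include keeps the first build from failing while the .d files do not exist yet; the manual variant would likewise need to name the real target, i.e. obj/x.o: x.cpp x.h.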

    Read the article

  • How can I write to a single xml file from two programs at the same time?

    - by Tom Bushell
    We recently started working with XML files, after many years of experience with the old INI files. My coworker found some CodeProject sample code that uses System.Xml.XmlDocument.Save. We are getting exceptions when two programs try to write to the same file at the same time:

        System.IO.IOException: The process cannot access the file 'C:\Test.xml' because it is being used by another process.

    This seems obvious in hindsight, but we had not anticipated it because accessing INI files via the Win32 API does not have this limitation. I assume there's some arbitration done by the Win32 calls that works at a higher level than the XmlDocument.Save method. I'm hoping there are higher-level XML routines somewhere in the .NET library that work similarly to the Win32 functions, but I don't know where to start looking. Or maybe we can set up our file access permissions to allow multiple programs to write to the same file? Time is short (like on almost all software projects), and if we can't find a solution quickly, we'll have to hold our noses and go back to INI files.
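
    There is no built-in .NET XML API that arbitrates concurrent writers the way the profile-string Win32 calls do; the usual pattern is to serialize access yourself. A hedged sketch using a named, system-wide mutex around the save (the mutex name and timeout are illustrative):

        using System;
        using System.Threading;
        using System.Xml;

        // Both programs create the mutex with the same name, so only one saves at a time.
        static void SaveShared(XmlDocument doc, string path)
        {
            using (var mutex = new Mutex(false, @"Global\TestXmlFileLock"))
            {
                if (!mutex.WaitOne(TimeSpan.FromSeconds(10)))
                    throw new TimeoutException("Could not acquire the XML file lock.");
                try
                {
                    doc.Save(path);
                }
                finally
                {
                    mutex.ReleaseMutex();
                }
            }
        }

    Readers that may run concurrently should take the same mutex, and a retry loop around the IOException is a reasonable fallback when one of the writers cannot be changed.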

    Read the article

  • Dividing web.config into multiple files in asp.net

    - by Jalpesh P. Vadgama
    When you have different people working remotely on one project, you will run into problems with web.config, because everybody ends up with a different version of it. Once you check in your web.config with your latest changes, the other people have to get that latest web.config and then re-apply the changes specific to their local environment. Most people who have worked remotely have faced this problem; I think the most common examples are connection string and app settings changes. For this kind of situation, the best solution is to divide particular sections of web.config into multiple files. For example, we could have a separate ConnectionStrings.config file for connection strings and an AppSettings.config file for app settings. Many people do not know that there is an attribute called 'configSource' where we can define the path of an external config file, and the section will be loaded from that external file. Just like below.

        <configuration>
          <appSettings configSource="AppSettings.config"/>
          <connectionStrings configSource="ConnectionStrings.config"/>
        </configuration>

    And you could have your ConnectionStrings.config file like the following.

        <connectionStrings>
          <add name="DefaultConnection"
               connectionString="Data Source=(LocalDb)\v11.0;Initial Catalog=aspnet-WebApplication1-20120523114732;Integrated Security=True"
               providerName="System.Data.SqlClient" />
        </connectionStrings>

    In the same way you can have another AppSettings.config file like the following.

        <appSettings>
          <add key="aspnet:UseTaskFriendlySynchronizationContext" value="true" />
          <add key="ValidationSettings:UnobtrusiveValidationMode" value="WebForms" />
        </appSettings>

    That's it. Hope you like this post. Stay tuned for more..

    Read the article

  • My Ubuntu Touch seems to be broken no matter how many different files I try

    - by zeokila
    So I'm planning on testing out Ubuntu Touch, and developing some applications for it so I thought I would flash it to my Nexus 4 that was already unlocked, and running Paranoid Android and the kernel associated. I headed to Ubuntu's website, browsed around and came across this page: Touch/Install - Ubuntu Wiki I followed Step 1 perfectly, word by word. Its seems to me that everything on that part is fine. I skipped Step 2 having already done that for Paranoid Android, and then I follow 3 and 4 word to word also. Using the command phablet-flash -b everything seemed to be fine. So it booted up and all seemed normal, but it wasn't. Here are some major bugs that only seem to happen to me. So I was greeted with a normal lock screen: But one of the first noticable things on the home screen is that I only have 4 tabs, not 5: Some of the applications that are supposed to work do not (I know some are dummies) like here with the calculator, this is what I get: On the homescreen it is a black box, it will crash a couple of seconds later: Another annoying problem is that when I want to close apps, I have from right to left, if I close an app on the left first it will close it, but it will open the app to it's right, weird: Yet another bug, this one is in the pull down drawer, when you just click on it, you can see that not all the icons are there: Pretty much everything else works as it should, but my main problem is that there is no telephony, I'm not sure how it works exactly, but I'm never asked a SIM code (I'm guessing you need that?), I can't compose SMS's and can't dial numbers, it won't let me select the 'Send' or 'Call' button. I've after tried manual installs all over the place with these files: http://cdimage.ubuntu.com/ubuntu-touch-preview/daily-preinstalled/current/saucy-preinstalled-armel+mako.zip (Says 44MB on site - 46.6MB on my laptop) http://cdimage.ubuntu.com/ubuntu-touch-preview/daily-preinstalled/current/saucy-preinstalled-phablet-armhf.zip (Says 366MB on site - 383.2MB on my laptop) There are some weird size differences between what the site told me and what I downloaded, but I've tried re-downloading just to end up with the same file. And it just alway ends up with the same problems. No telephony and those weird bugs. So my question is, how the hell can I get the same version as everyone else, with the ability to send texts and call and open the calculator and more? Also, definitely running saucy: And maybe useful? This is what's in the device info:

    Read the article

  • dpkg stuck downloading font files

    - by Bob Bowles
    I have been reinstalling Ubuntu 12.04. The install from USB works fine, and I could update everything OK, but when I got to re-installing my application software I hit a snag. One of the packages I tried to re-install was ttf-mscorefonts-installer. dpkg stalled during this setup, downloading a font file (it had tried to download it all night). I stopped dpkg, and attempted to re-start downloading something else, but it would not let me. The commands I typed are as follows: bob@bobStudio:~$ sudo rm /var/lib/dpkg/lock This unlocks dpkg, but if I try to do something I get the following message (eg): bob@bobStudio:~$ sudo apt-get install synaptic E: dpgk was interrupted, you must manually run 'sudo dpkg --configure -a' to correct the problem So, I did just that: bob@bobStudio:~$ sudo dpkg --configure -a whereupon it started the previously failed download all over again. I went round the loop here a few times and each time after the configure command it re-started the failing download, but then I got this: bob@bobStudio:~$ sudo dpkg --configure -a Setting up update-notifier-common (0.119ubuntu8.4) ... ttf-mscorefonts-installer: downloading http://downloads.sourceforge.net/corefonts/andale32.exe Traceback (most recent call last): File "/usr/lib/update-notifier/package-data-downloader", line 234, in process_download_requests dest_file = urllib.urlretrieve(files[i])[0] File "/usr/lib/python2.7/urllib.py", line 93, in urlretrieve return _urlopener.retrieve(url, filename, reporthook, data) File "/usr/lib/python2.7/urllib.py", line 239, in retrieve fp = self.open(url, data) File "/usr/lib/python2.7/urllib.py", line 207, in open return getattr(self, name)(url) File "/usr/lib/python2.7/urllib.py", line 344, in open_http h.endheaders(data) File "/usr/lib/python2.7/httplib.py", line 954, in endheaders self._send_output(message_body) File "/usr/lib/python2.7/httplib.py", line 814, in _send_output self.send(msg) File "/usr/lib/python2.7/httplib.py", line 776, in send self.connect() File "/usr/lib/python2.7/httplib.py", line 757, in connect self.timeout, self.source_address) File "/usr/lib/python2.7/socket.py", line 553, in create_connection for res in getaddrinfo(host, port, 0, SOCK_STREAM): IOError: [Errno socket error] [Errno -2] Name or service not known Setting up ttf-mscorefonts-installer (3.4ubuntu3) ... bob@bobStudio:~$ sudo apt-get update E: Could not get lock /var/lib/apt/lists/lock - open (11: Resource temporarily unavailable) E: Unable to lock directory /var/lib/apt/lists/ bob@bobStudio:~$ sudo rm /var/lib/dpkg/lock bob@bobStudio:~$ sudo apt-get update E: Could not get lock /var/lib/apt/lists/lock - open (11: Resource temporarily unavailable) E: Unable to lock directory /var/lib/apt/lists/ The good news is that, once I sorted out the file locks, this seems to have permanently aborted the setup of the font package, so at least I can do something else with dpkg. That leaves two questions: 1) How could I have broken the loop without actually crashing out of dpkg? 2) How can I set up the ttf-mscorefonts-installer package in the future? Is this download really broken, or is it 'just' a bad Internet connection?
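
    To answer the second question without fighting the downloader again: the traceback ends in a name-resolution failure ("Name or service not known"), so the download itself is not broken, the machine simply could not resolve sourceforge.net at that moment. A hedged sketch is to drop the half-configured package and reinstall it once the network path works:

        # stop dpkg from retrying the font download on every configure run
        sudo dpkg --purge ttf-mscorefonts-installer
        sudo apt-get -f install

        # later, with working name resolution (check with: host downloads.sourceforge.net)
        sudo apt-get install ttf-mscorefonts-installer

    For the first question, interrupting the stalled download with Ctrl+C and then running sudo dpkg --configure -a once connectivity is back is generally safer than deleting the lock files by hand.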

    Read the article

  • Importing PKCS#12 (.p12) files into Firefox From the Command Line

    - by user11165
    I’ve posted this question up on #Ubuntu and #Firefox Forums, and really could do with some help.. Anyone know where i could look or help with the answer. I’m hoping the power of social media will come through… I have a need to perform the following action: Firefox 3.6.x: Quote: open Edit - Preferences - Advanced - Encryption - View Certificates - Your Certificates - Import However i need the same functionality from the bash command line. So far I’ve established that the following command is supposed to be used: Quote: certutil -A -t “u,u,u” -d /home/df001/.mozilla/firefox/qe5y5lht.tc.default/ -n “mycert” -i client.p12 This executes with no isses, however, doesn’t show up in any Firefox Certificate store. However, I have noted that prior to running this command, i have a cert8.db key3.db and secmod.db file in the above folder. After running the command the certutil seems to have created a cert9.db, key4.db and pkcs12.txt file Listing the contents using the command: Quote: certutil -L -d sql:/home/df001/.mozilla/firefox/qe5y5lht.tc.default/ does seem to confirm my attempts of importing files into a certificate folder of some kind have worked. because i get Quote: Certificate Nickname Trust Attributes SSL,S/MIME,JAR/XPI Thawte SSL CA „ Go Daddy Secure Certification Authority „ Thawte SGC CA „ Entrust Certification Authority - L1C „ My Nero CT,C,c mynero P„ davidfield - Internet Widgits Pty Ltd u,u,u So, having tried this, and heading back over to the www, i cam across this command: Quote: pk12util -d /home/df001/.mozilla/firefox/qe5y5lht.tc.default/ -i client.p12 -n “David Field” -P “cert8.db” this again, appears to be importing something somewhere, however, again, Viewing certs from the Firefox interface doesn’t show the imported Cert. I’m surmising here on reading that the certutil and pk12util are creating a new NSS database, which firefox isn’t reading. So my question is, how can i get the p12 cert from the command line so it displays in the firefox Certificate manager interface? Why have i posted this here? Why not post on the firefox forum? Well i will copy and post the same question there as well, however the ability to use the command line to do this is important, as I have potentially 2000 machines which will need a user cert imported into firefox via a p12 file. I need to do this in the form of a script, i thought the hard part was going to be making the p12 file from the microsoft 2003 CA, turns out thats easy. I can’t just import via the GUI and copy over cert8.db x 2000, i can’t ask users to use the CA webinterface as its for VPN access, the users are off site, and they need the VPN to get to the cert server.. Is there any person out there who can help? By the way, i don't have the tor buttun installed.
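
    A plausible explanation for the invisible import, given the cert9.db and key4.db files that appeared: recent certutil/pk12util builds default to the newer SQLite certificate database format, while Firefox 3.6 still reads the old cert8.db/key3.db (dbm) format. Forcing the legacy format with a dbm: prefix is worth a try; a sketch for a scripted rollout (profile path and password are placeholders):

        PROFILE=$HOME/.mozilla/firefox/qe5y5lht.tc.default
        pk12util -i client.p12 -d "dbm:$PROFILE" -W 'p12-export-password'
        certutil -L -d "dbm:$PROFILE"      # should now list the cert in cert8.db

    Firefox must be closed while the database is touched, or it may overwrite or ignore the change on exit.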

    Read the article

  • Apache doesn't load .php files

    - by Haddex
    First, sorry for my English and asking something that it's quite answered all over the web. I've read a lot of post about this problem but I still can't find the solution. I'm a web developer who recently moved to Ubuntu from Windows 7. I had a website done (it's online and working) and I set up LAMP to keep working with it. I made a test.php file with: <?php phpinfo(); ?> and put it on /var/www/html directory, it shows all the information about the php and I was really happy: "Ok, it's all done, tomorrow I will work hard" But I placed my whole web into /var/www/html , not in a folder, the index.php is in /var/www/html but guess what: doesn't load any of my .php files, the browser just keep thinking. What I did: I rebooted Apache: /etc/init.d/apache2 restart I tried again with the test.php file and it works fine I put in /var/www/html a .html file and works fine. I looked for /etc/apache2/sites-enable/000-default.conf and it says: DocumentRoot /var/www/html I looked for /etc/apache2/mods-enabled/dir.conf and it says: DirectoryIndex index.html index.cgi index.pl index.php ... Edit* I think it's something related to phpmyadmin, like if I'm not able to connect with the database. But I got nothing on the screen when trying to load the page so...I'm not sure. I can access to the url localhost/phpmyadmin and I edited the connection.php file like this: <?php # FileName="Connection_php_mysql.htm" # Type="MYSQL" # HTTP="true" $hostname_rakstadconnection = "localhost"; $database_rakstadconnection = "rakstadclandb"; $username_rakstadconnection = "root"; $password_rakstadconnection = "admin"; $rakstadconnection = mysql_connect($hostname_rakstadconnection, $username_rakstadconnection, $password_rakstadconnection) or trigger_error(mysql_error(),E_USER_ERROR); mysql_query("SET NAMES 'utf8'"); ?> The name of the database is correct, like the user and password. http://i89.photobucket.com/albums/k220/Haddex/Capturadepantallade2014-06-09112609_zpsc45ddb72.png http://i89.photobucket.com/albums/k220/Haddex/Capturadepantallade2014-06-09112120_zps0b9e15f7.png *Edit2: could this be because it's a website that I brought to Linux from Windows? I used Dreamweaver. Edit3: I changed the # to /*/, nothing. The error.log file says: [Mon Jun 09 17:08:13.627881 2014] [:error] [pid 1517] [client 127.0.0.1:46663] PHP Warning: require_once(/var/www/html/Connections/rakstadconnection.php): failed to open stream: Permission denied in /var/www/html/index.php on line 1 [Mon Jun 09 17:08:13.627933 2014] [:error] [pid 1517] [client 127.0.0.1:46663] PHP Fatal error: require_once(): Failed opening required 'Connections/rakstadconnection.php' (include_path='.:/usr/share/php:/usr/share/pear') in /var/www/html/index.php on line 1 I'm reading error log but...should I add a linux path into a my index.php file? Don't think so. Thanks.
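
    The error log points at a plain permission problem on the Connections folder rather than anything PHP- or Windows-specific: the Apache user (www-data on Ubuntu) cannot read /var/www/html/Connections/rakstadconnection.php. A hedged sketch of the usual fix (adjust ownership to taste; 755 for directories and 644 for files is a common baseline for a site copied in from another machine):

        sudo chown -R $USER:www-data /var/www/html
        sudo find /var/www/html -type d -exec chmod 755 {} \;
        sudo find /var/www/html -type f -exec chmod 644 {} \;

    After that, sudo service apache2 reload and retry; the blank page should turn into either the working site or a more specific MySQL error.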

    Read the article

  • ZFS for Database Log Files

    - by user12620111
    I've been troubled by drop outs in CPU usage in my application server, characterized by the CPUs suddenly going from close to 90% CPU busy to almost completely CPU idle for a few seconds. Here is an example of a drop out as shown by a snippet of vmstat data taken while the application server is under a heavy workload. # vmstat 1  kthr      memory            page            disk          faults      cpu  r b w   swap  free  re  mf pi po fr de sr s3 s4 s5 s6   in   sy   cs us sy id  1 0 0 130160176 116381952 0 16 0 0 0 0  0  0  0  0  0 207377 117715 203884 70 21 9  12 0 0 130160160 116381936 0 25 0 0 0 0 0  0  0  0  0 200413 117162 197250 70 20 9  11 0 0 130160176 116381920 0 16 0 0 0 0 0  0  1  0  0 203150 119365 200249 72 21 7  8 0 0 130160176 116377808 0 19 0 0 0 0  0  0  0  0  0 169826 96144 165194 56 17 27  0 0 0 130160176 116377800 0 16 0 0 0 0  0  0  0  0  1 10245 9376 9164 2  1 97  0 0 0 130160176 116377792 0 16 0 0 0 0  0  0  0  0  2 15742 12401 14784 4 1 95  0 0 0 130160176 116377776 2 16 0 0 0 0  0  0  1  0  0 19972 17703 19612 6 2 92  14 0 0 130160176 116377696 0 16 0 0 0 0 0  0  0  0  0 202794 116793 199807 71 21 8  9 0 0 130160160 116373584 0 30 0 0 0 0  0  0 18  0  0 203123 117857 198825 69 20 11 This behavior occurred consistently while the application server was processing synthetic transactions: HTTP requests from JMeter running on an external machine. I explored many theories trying to explain the drop outs, including: Unexpected JMeter behavior Network contention Java Garbage Collection Application Server thread pool problems Connection pool problems Database transaction processing Database I/O contention Graphing the CPU %idle led to a breakthrough: Several of the drop outs were 30 seconds apart. With that insight, I went digging through the data again and looking for other outliers that were 30 seconds apart. In the database server statistics, I found spikes in the iostat "asvc_t" (average response time of disk transactions, in milliseconds) for the disk drive that was being used for the database log files. Here is an example:                     extended device statistics     r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device     0.0 2053.6    0.0 8234.3  0.0  0.2    0.0    0.1   0  24 c3t60080E5...F4F6d0s0     0.0 2162.2    0.0 8652.8  0.0  0.3    0.0    0.1   0  28 c3t60080E5...F4F6d0s0     0.0 1102.5    0.0 10012.8  0.0  4.5    0.0    4.1   0  69 c3t60080E5...F4F6d0s0     0.0   74.0    0.0 7920.6  0.0 10.0    0.0  135.1   0 100 c3t60080E5...F4F6d0s0     0.0  568.7    0.0 6674.0  0.0  6.4    0.0   11.2   0  90 c3t60080E5...F4F6d0s0     0.0 1358.0    0.0 5456.0  0.0  0.6    0.0    0.4   0  55 c3t60080E5...F4F6d0s0     0.0 1314.3    0.0 5285.2  0.0  0.7    0.0    0.5   0  70 c3t60080E5...F4F6d0s0 Here is a little more information about my database configuration: The database and application server were running on two different SPARC servers. Storage for the database was on a storage array connected via 8 gigabit Fibre Channel Data storage and log file were on different physical disk drives Reliable low latency I/O is provided by battery backed NVRAM Highly available: Two Fibre Channel links accessed via MPxIO Two Mirrored cache controllers The log file physical disks were mirrored in the storage device Database log files on a ZFS Filesystem with cutting-edge technologies, such as copy-on-write and end-to-end checksumming Why would I be getting service time spikes in my high-end storage? 
First, I wanted to verify that the database log disk service time spikes aligned with the application server CPU drop outs, and they did: At first, I guessed that the disk service time spikes might be related to flushing the write through cache on the storage device, but I was unable to validate that theory. After searching the WWW for a while, I decided to try using a separate log device: # zpool add ZFS-db-41 log c3t60080E500017D55C000015C150A9F8A7d0 The ZFS log device is configured in a similar manner as described above: two physical disks mirrored in the storage array. This change to the database storage configuration eliminated the application server CPU drop outs: Here is the zpool configuration: # zpool status ZFS-db-41   pool: ZFS-db-41  state: ONLINE  scan: none requested config:         NAME                                     STATE         ZFS-db-41                                ONLINE           c3t60080E5...F4F6d0  ONLINE         logs           c3t60080E5...F8A7d0  ONLINE Now, the I/O spikes look like this:                     extended device statistics                  r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device     0.0 1053.5    0.0 4234.1  0.0  0.8    0.0    0.7   0  75 c3t60080E5...F8A7d0s0                     extended device statistics                  r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device     0.0 1131.8    0.0 4555.3  0.0  0.8    0.0    0.7   0  76 c3t60080E5...F8A7d0s0                     extended device statistics                  r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device     0.0 1167.6    0.0 4682.2  0.0  0.7    0.0    0.6   0  74 c3t60080E5...F8A7d0s0     0.0  162.2    0.0 19153.9  0.0  0.7    0.0    4.2   0  12 c3t60080E5...F4F6d0s0                     extended device statistics                  r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device     0.0 1247.2    0.0 4992.6  0.0  0.7    0.0    0.6   0  71 c3t60080E5...F8A7d0s0     0.0   41.0    0.0   70.0  0.0  0.1    0.0    1.6   0   2 c3t60080E5...F4F6d0s0                     extended device statistics                  r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device     0.0 1241.3    0.0 4989.3  0.0  0.8    0.0    0.6   0  75 c3t60080E5...F8A7d0s0                     extended device statistics                  r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device     0.0 1193.2    0.0 4772.9  0.0  0.7    0.0    0.6   0  71 c3t60080E5...F8A7d0s0 We can see the steady flow of 4k writes to the ZIL device from O_SYNC database log file writes. The spikes are from flushing the transaction group. Like almost all problems that I run into, once I thoroughly understand the problem, I find that other people have documented similar experiences. Thanks to all of you who have documented alternative approaches. Saved for another day: now that the problem is obvious, I should try "zfs:zfs_immediate_write_sz" as recommended in the ZFS Evil Tuning Guide. References: The ZFS Intent Log Solaris ZFS, Synchronous Writes and the ZIL Explained ZFS Evil Tuning Guide: Cache Flushes ZFS Evil Tuning Guide: Tuning ZFS for Database Performance

    Read the article
