Search Results

Search found 68478 results on 2740 pages for 'file descriptor'.

Page 95/2740

  • [FIXED] Scan file contents into an array of a structure.

    - by ZaZu
    Hello, I have a structure in my program that contains a particular array. I want to scan a random file of numbers and put the contents into that array. (NOTE: this is a sample from a bigger program, so I need the structure and arrays as declared.) The contents of the file are basically:

        5 4 3 2
        5 3 4 2

    This is my code:

        #include <stdio.h>
        #define first 500
        #define sec 500

        struct trial {
            int f;
            int r;
            float what[first][sec];
        };

        int trialtest(struct trial *test);

        main() {
            struct trial test;
            trialtest(&test);
        }

        int trialtest(struct trial *test) {
            int z, x, i;
            FILE *fin;
            fin = fopen("randomfile.txt", "r");
            for (i = 0; i < 5; i++) {
                fscanf(fin, "%5.2f\t", (*test).what[z][x]);
            }
            fclose(fin);
            return 0;
        }

    But whenever I run this code, I get this error:

        (25) : warning 508 - Data of type 'double' supplied where a pointer is required

    I tried adding

        do {
            for (i = 0; i < 5; i++) {
                q = fscanf(fin, "%5.2f\t", (*test).what[z][x]);
            }
        } while (q != EOF);

    but that didn't work either; it gives the same error. Does anyone have a solution to this problem?
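    A hedged sketch of a fix (mine, not from the original post): "%5.2f" is a printf-style format, since scanf accepts a width but not a ".2" precision, and fscanf needs the ADDRESS of the target element, so passing the value (*test).what[z][x] is what triggers the "pointer required" warning; z and x are also used uninitialized. Assuming the eight numbers are meant to fill two rows of four columns (the question never says how they map onto the 500x500 array):

        #include <stdio.h>
        #define first 500
        #define sec 500

        struct trial {
            int f;
            int r;
            float what[first][sec];
        };

        /* Read 2 rows x 4 columns into test->what.
           The 2x4 layout is an assumption for illustration only. */
        int trialtest(struct trial *test) {
            FILE *fin = fopen("randomfile.txt", "r");
            if (fin == NULL)
                return -1;
            for (int z = 0; z < 2; z++) {
                for (int x = 0; x < 4; x++) {
                    /* "%f" reads a float; pass a pointer, not the value */
                    if (fscanf(fin, "%f", &test->what[z][x]) != 1) {
                        fclose(fin);
                        return -1;
                    }
                }
            }
            fclose(fin);
            return 0;
        }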

    Read the article

  • List all files from a directory recursively with Java

    - by Hultner
    Okay, I've got this function that prints the names of all files in a directory recursively. The problem is that it's very slow, because it reads from a network device and with my current code it has to access the device time after time. What I would want is to first load all the files from the directory recursively, and then afterwards go through them with a regex to filter out all the files I don't want, unless anyone has a better suggestion. I've never done anything like this before.

        public static void printFnames(String sDir) {
            File[] faFiles = new File(sDir).listFiles();
            for (File file : faFiles) {
                if (file.getName().matches("^(.*?)")) {
                    System.out.println(file.getAbsolutePath());
                }
                if (file.isDirectory()) {
                    printFnames(file.getAbsolutePath());
                }
            }
        }

    This is just a test; later on I'm not going to use the code like this. Instead, I'm going to add the path and modification date of every file which matches an advanced regex to an array.
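    One possible approach (a sketch, not from the original post): walk the tree exactly once, collect every path, and do the regex filtering locally afterwards. On Java 8, Files.walk does the traversal in a single pass:

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.util.List;
        import java.util.regex.Pattern;
        import java.util.stream.Collectors;
        import java.util.stream.Stream;

        public class FileLister {
            // Traverse the (possibly remote) directory tree once, then filter in memory.
            public static List<Path> findMatching(String rootDir, String regex) throws IOException {
                Pattern pattern = Pattern.compile(regex);
                try (Stream<Path> paths = Files.walk(Paths.get(rootDir))) {
                    return paths.filter(Files::isRegularFile)
                                .filter(p -> pattern.matcher(p.getFileName().toString()).matches())
                                .collect(Collectors.toList());
                }
            }

            public static void main(String[] args) throws IOException {
                for (Path p : findMatching(args[0], ".*\\.txt")) {
                    System.out.println(p.toAbsolutePath());
                }
            }
        }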

    Read the article

  • Question about using an Access database as a resource file in Visual Studio.

    - by user354303
    Hi, I am trying to embed a Microsoft Access database file into my class assembly DLL. I want my code to reference the resource file and use it with an ADODB.Connection object. Anybody know a simpler way, or an easier way? Or what is wrong with my code? When I added the resource file it added dataset definitions for me, but I have no idea what to do with those. The connection string I am trying below is from an automatically generated app.config. I did add the item as a resource...

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Data;
        using ConsoleApplication1.Resources; //SPPrinterLicenses
        using System.Data.OleDb;
        using ADODB;
        using System.Configuration;

        namespace ConsoleApplication1
        {
            class SharePointPrinterManager
            {
                public static bool IsValidLicense(string HardwareID)
                {
                    OleDbDataAdapter da = new OleDbDataAdapter();
                    DataSet ds = new DataSet();
                    ADODB.Connection adoCn = new Connection();
                    ADODB.Recordset adoRs = new Recordset();
                    // **open command below fails**
                    adoCn.Open(
                        @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=|DataDirectory|\Resources\SPPrinterLicenses.accdb;Persist Security Info=True",
                        "", "", 1);
                    adoRs.Open("Select * from AllWorkstationLicenses", adoCn,
                        ADODB.CursorTypeEnum.adOpenForwardOnly,
                        ADODB.LockTypeEnum.adLockReadOnly, 1);
                    da.Fill(ds, adoRs, "AllworkstationLicenses");
                    adoCn.Close();
                    DataTable dt = new DataTable();
                    //ds.Tables.
                    return true;
                }
            }
        }
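    One possible workaround, sketched rather than confirmed: the ACE OLEDB provider can only open a database that exists as a real file on disk, and |DataDirectory| substitution will not reach inside an assembly. Writing the embedded .accdb out to a temp file and connecting to that path with plain System.Data.OleDb sidesteps both the embedding problem and the ADODB interop (the resource property name below is an assumption):

        using System.Data;
        using System.Data.OleDb;
        using System.IO;

        static class LicenseDb
        {
            public static DataTable LoadLicenses()
            {
                // Write the embedded database to a temp file; ACE cannot read from memory.
                // Properties.Resources.SPPrinterLicenses is assumed to be the byte[] resource.
                string tempPath = Path.Combine(Path.GetTempPath(), "SPPrinterLicenses.accdb");
                File.WriteAllBytes(tempPath, ConsoleApplication1.Properties.Resources.SPPrinterLicenses);

                string connStr = "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" + tempPath;
                using (var cn = new OleDbConnection(connStr))
                using (var da = new OleDbDataAdapter("SELECT * FROM AllWorkstationLicenses", cn))
                {
                    var dt = new DataTable("AllWorkstationLicenses");
                    da.Fill(dt); // Fill opens and closes the connection itself
                    return dt;
                }
            }
        }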

    Read the article

  • Scala : cleanest way to recursively parse files checking for multiple strings

    - by fred basset
    Hi All, I want to write a Scala script to recursively process all files in a directory. For each file I'd like to see if there are any cases where a string occurs at line X and at line X - 2. If a case like that occurs, I'd like to stop processing that file and add its filename to a map of filenames to occurrence counts. I just started learning Scala today; I've got the file-recursion code working and need some help with the string searching. Here's what I have so far:

        import java.io.File
        import scala.io.Source

        val s1 = "CmdNum = 506"
        val s2 = "Data = [0000,]"

        def processFile(f: File) {
            val lines = scala.io.Source.fromFile(f).getLines.toArray
            for (i <- 0 to lines.length - 1) {
                // want to do string searches here, see if line contains s1
                // and the line two lines above also contains s1
                //println(lines(i))
            }
        }

        def recursiveListFiles(f: File): Array[File] = {
            val these = f.listFiles
            if (these != null) {
                for (i <- 0 to these.length - 1) {
                    if (these(i).isFile) {
                        processFile(these(i))
                    }
                }
                these ++ these.filter(_.isDirectory).flatMap(recursiveListFiles)
            } else {
                Array[File]()
            }
        }

        println(recursiveListFiles(new File(args(0))))
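    A possible way to fill in the search (my sketch; it assumes "occurs" means substring containment, that s1 is the string of interest, and that each matching file is counted once):

        import java.io.File
        import scala.collection.mutable
        import scala.io.Source

        val s1 = "CmdNum = 506"
        val hits = mutable.Map[String, Int]().withDefaultValue(0)

        // True at the first line i where both line i and line i-2 contain target;
        // exists() stops scanning the file as soon as that happens.
        def hasPair(f: File, target: String): Boolean = {
            val src = Source.fromFile(f)
            try {
                val lines = src.getLines().toArray
                (2 until lines.length).exists(i =>
                    lines(i).contains(target) && lines(i - 2).contains(target))
            } finally src.close()
        }

        def processFile(f: File): Unit =
            if (hasPair(f, s1)) hits(f.getName) += 1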

    Read the article

  • How to remove the first and last character from a file using batch script?

    - by infant programmer
    This is my input file content, which I am copying to the output file:

        #sdfs|dfasf|sdfs|
        sdfs|df!@$%%*&!sdffs|
        sasdfasfa|dfsdf|#sdfs|

    What I need to do is omit the first character '#' and the last character '|' in the output file, so the output will be:

        sdfs|dfasf|sdfs|
        sdfs|df!@$%%*&!sdffs|
        sasdfasfa|dfsdf|#sdfs

    Batch scripting is new to me, but I tried my best with these attempts:

        :: drop first and last char
        @echo off > xyz.txt & setLocal EnableDelayedExpansion
        for /f "tokens=* delims=" %%a in (E:\abc1.txt) do (
            set str=%%a
            set str=!str:~1!
            echo !str!>> xyz.txt
        )

    and

        @echo off > xyz.txt
        setLocal EnableDelayedExpansion
        for /f "tokens=1,2 delims= " %%a in (E:\abc1.txt) do (
            set /a N+=1
            if !N! gtr 2 (
                echo %%a >> xyz.txt
            ) else (
                set str=%%a
                set str=!str:#=!
                echo !str! >> xyz.txt
            )
        )

    As you can see, they are not able to produce the required output.
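    One way to do it that handles this sample (a sketch of mine, not a tested production script): count the lines first so the last one is identifiable, strip the first character of line 1 and the last character of the final line, and toggle delayed expansion inside the loop so the '!' characters in the data survive. Note that for /f skips empty lines.

        @echo off
        setlocal DisableDelayedExpansion
        set "src=E:\abc1.txt"
        set "dst=xyz.txt"

        rem Count the lines so we know which one is last.
        for /f %%c in ('find /c /v "" ^< "%src%"') do set "total=%%c"

        set "n=0"
        break > "%dst%"
        for /f "usebackq delims=" %%a in ("%src%") do (
            set /a n+=1
            set "line=%%a"
            rem Delayed expansion is enabled only now, so ! in the data survived the read.
            setlocal EnableDelayedExpansion
            if !n! equ 1 set "line=!line:~1!"
            if !n! equ !total! set "line=!line:~0,-1!"
            >>"%dst%" echo(!line!
            endlocal
        )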

    Read the article

  • How can I measure the time it takes a file to upload to a PHP script?

    - by Gustav Bertram
    I am trying to measure, inside the receiving script, the time it takes for a file to upload to a PHP script. Here's a short example:

        <form action="#" enctype="multipart/form-data" method="post">
            Upload: <input type="file" name="datafile" size="40">
            <input type="submit" value="Send">
        </form>
        <?php
        $upload_time = time() - $_SERVER['REQUEST_TIME'];
        echo $upload_time . " seconds.";

    I've submitted 4MB, 25MB, 100MB and 1.4GB files. They took anything from a fraction of a second to almost a minute, yet the script above always yielded 0 seconds. NOTE: I know this is a duplicate question, but the accepted answer on the other question did not work for me (at least, initially): Detect how long it takes for a file to upload (PHP). I tried it using PHP 5.3.3 with Apache 2.2.16 on Ubuntu 10.4. UPDATE: I've discovered that both the Apache header and Ajax solutions work. In fact, the solution on the duplicate question also works. The critical element to making these solutions work is to ensure that the following values in php.ini are sufficiently high:

        memory_limit = 1000M
        upload_max_filesize = 1000M
        post_max_size = 1000M

    I am accepting the Apache header solution, since it only uses PHP.
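    A sketch of the Apache-header approach (my reconstruction; it assumes mod_headers is enabled, whose %t format stamps the request's arrival as "t=<microseconds since the epoch>"). First, in the Apache config:

        RequestHeader set X-Request-Received "%t"

    Then the receiving script can subtract that timestamp from the current time:

        <?php
        // mod_headers stamped the arrival time as "t=<microseconds>"
        $received = isset($_SERVER['HTTP_X_REQUEST_RECEIVED'])
            ? $_SERVER['HTTP_X_REQUEST_RECEIVED'] : '';
        if (preg_match('/t=(\d+)/', $received, $m)) {
            $uploadSeconds = microtime(true) - ((float) $m[1]) / 1000000.0;
            echo round($uploadSeconds, 2) . ' seconds.';
        }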

    Read the article

  • Splitting code in Java - don't know what's wrong [closed]

    - by ???? ?????
    I'm writing code to split a file into many files of a size specified in the code, and then join these parts back together later. The problem is with the joining code: it doesn't work, and I can't figure out what is wrong! This is my code:

        import java.io.*;
        import java.util.*;

        public class StupidSplit {
            static final int Chunk_Size = 10;
            static int size = 0;

            public static void main(String[] args) throws IOException {
                String file = "b.txt";
                int chunks = DivideFile(file);
                System.out.print((new File(file)).delete());
                System.out.print(JoinFile(file, chunks));
            }

            static boolean JoinFile(String fname, int nChunks) {
                /*
                 * Joins the chunks together. Chunks have been divided using the
                 * DivideFile function, so the last part of the filename will be
                 * ".partxxxx". Checks if all parts are together by matching the
                 * number of chunks found against "nChunks", then joins the file,
                 * otherwise throws an error.
                 */
                boolean successful = false;
                File currentDirectory = new File(System.getProperty("user.dir"));
                File[] fileList = currentDirectory.listFiles();
                /* populate only the files having extension like "partxxxx" */
                List<File> lst = new ArrayList<File>();
                // Arrays.sort(fileList);
                for (File file : fileList) {
                    if (file.isFile()) {
                        String fnm = file.getName();
                        int lastDot = fnm.lastIndexOf('.');
                        // add to list files which match the name given by "fname"
                        // and have "partxxxx" as extension
                        if (fnm.substring(0, lastDot).equalsIgnoreCase(fname)
                                && (fnm.substring(lastDot + 1)).substring(0, 4).equals("part")) {
                            lst.add(file);
                        }
                    }
                }
                /*
                 * sort the list - it will be sorted by extension only because we
                 * have ensured that the list only contains those files that have
                 * "fname" and "part"
                 */
                File[] files = (File[]) lst.toArray(new File[0]);
                Arrays.sort(files);
                System.out.println("size =" + files.length);
                System.out.println("hello");
                /* Ensure that the number of chunks matches the length of the array */
                if (files.length == nChunks - 1) {
                    File ofile = new File(fname);
                    FileOutputStream fos;
                    FileInputStream fis;
                    byte[] fileBytes;
                    int bytesRead = 0;
                    try {
                        fos = new FileOutputStream(ofile, true);
                        for (File file : files) {
                            fis = new FileInputStream(file);
                            fileBytes = new byte[(int) file.length()];
                            bytesRead = fis.read(fileBytes, 0, (int) file.length());
                            assert (bytesRead == fileBytes.length);
                            assert (bytesRead == (int) file.length());
                            fos.write(fileBytes);
                            fos.flush();
                            fileBytes = null;
                            fis.close();
                            fis = null;
                        }
                        fos.close();
                        fos = null;
                    } catch (FileNotFoundException fnfe) {
                        System.out.println("Could not find file");
                        successful = false;
                        return successful;
                    } catch (IOException ioe) {
                        System.out.println("Cannot write to disk");
                        successful = false;
                        return successful;
                    }
                    /* ensure size of file matches the size given by server */
                    successful = (ofile.length() == StupidSplit.size) ? true : false;
                } else {
                    successful = false;
                }
                return successful;
            }

            static int DivideFile(String fname) {
                File ifile = new File(fname);
                FileInputStream fis;
                String newName;
                FileOutputStream chunk;
                //int fileSize = (int) ifile.length();
                double fileSize = (double) ifile.length();
                int nChunks = 0, read = 0, readLength = Chunk_Size;
                byte[] byteChunk;
                try {
                    fis = new FileInputStream(ifile);
                    StupidSplit.size = (int) ifile.length();
                    while (fileSize > 0) {
                        if (fileSize <= Chunk_Size) {
                            readLength = (int) fileSize;
                        }
                        byteChunk = new byte[readLength];
                        read = fis.read(byteChunk, 0, readLength);
                        fileSize -= read;
                        assert (read == byteChunk.length);
                        nChunks++;
                        //newName = fname + ".part" + Integer.toString(nChunks - 1);
                        newName = String.format("%s.part%09d", fname, nChunks - 1);
                        chunk = new FileOutputStream(new File(newName));
                        chunk.write(byteChunk);
                        chunk.flush();
                        chunk.close();
                        byteChunk = null;
                        chunk = null;
                    }
                    fis.close();
                    System.out.println(nChunks);
                } catch (FileNotFoundException fnfe) {
                    System.out.println("Could not find the given file");
                    System.exit(-1);
                } catch (IOException ioe) {
                    System.out.println("Error while creating file chunks. Exiting program");
                    System.exit(-1);
                }
                System.out.println(nChunks);
                return nChunks;
            }
        }
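    One likely culprit, from my reading rather than a confirmed fix: DivideFile creates nChunks files, named .part000000000 through .part<nChunks-1>, so JoinFile finds files.length == nChunks; but the guard compares against nChunks - 1, so the join always falls into the else branch and returns false. A minimal change:

        // JoinFile found every chunk DivideFile produced,
        // so compare against nChunks, not nChunks - 1:
        if (files.length == nChunks) {
            // ... join as before
        }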

    Read the article

  • SmtpClient and Locked File Attachments

    - by Rick Strahl
    Got a note a couple of days ago from a client using one of my generic routines that wraps SmtpClient. Apparently whenever a file has been attached to a message and emailed with SmtpClient, the file remains locked after the message has been sent. Oddly, this particular issue hasn't cropped up before for me, although these routines are in use in a number of applications I've built.

    The wrapper I use was built mainly to backfit an old pre-.NET 2.0 email client I built using Sockets to avoid the CDO nightmares of the .NET 1.x mail client. The current class retained the same class interface but now internally uses SmtpClient, which holds a flat property interface that makes it less verbose to send off email messages. File attachments in this interface are handled by providing a comma-delimited list of files in an Attachments string property, which is then collected along with the other flat property settings and eventually passed on to SmtpClient in the form of a MailMessage structure. The gist of the code is something like this:

        /// <summary>
        /// Fully self contained mail sending method. Sends an email message by connecting
        /// and disconnecting from the email server.
        /// </summary>
        /// <returns>true or false</returns>
        public bool SendMail()
        {
            if (!this.Connect())
                return false;
            try
            {
                // Create and configure the message
                MailMessage msg = this.GetMessage();
                smtp.Send(msg);
                this.OnSendComplete(this);
            }
            catch (Exception ex)
            {
                string msg = ex.Message;
                if (ex.InnerException != null)
                    msg = ex.InnerException.Message;
                this.SetError(msg);
                this.OnSendError(this);
                return false;
            }
            finally
            {
                // close connection and clear out headers
                // SmtpClient instance nulled out
                this.Close();
            }
            return true;
        }

        /// <summary>
        /// Configures the message interface
        /// </summary>
        protected virtual MailMessage GetMessage()
        {
            MailMessage msg = new MailMessage();
            msg.Body = this.Message;
            msg.Subject = this.Subject;
            msg.From = new MailAddress(this.SenderEmail, this.SenderName);
            if (!string.IsNullOrEmpty(this.ReplyTo))
                msg.ReplyTo = new MailAddress(this.ReplyTo);

            // Send all the different recipients
            this.AssignMailAddresses(msg.To, this.Recipient);
            this.AssignMailAddresses(msg.CC, this.CC);
            this.AssignMailAddresses(msg.Bcc, this.BCC);

            if (!string.IsNullOrEmpty(this.Attachments))
            {
                string[] files = this.Attachments.Split(new char[2] { ',', ';' },
                                                        StringSplitOptions.RemoveEmptyEntries);
                foreach (string file in files)
                    msg.Attachments.Add(new Attachment(file));
            }

            if (this.ContentType.StartsWith("text/html"))
                msg.IsBodyHtml = true;
            else
                msg.IsBodyHtml = false;

            msg.BodyEncoding = this.Encoding;
            // ... additional code omitted

            return msg;
        }

    Basically this code collects all the property settings of the wrapper object and applies them in GetMessage() to the individual MailMessage properties. Specifically, notice that attachment filenames are converted from a comma-delimited string to filenames from which new attachments are created. The code as it's written, however, will cause the problem with file attachments not being released properly. Internally .NET opens up stream handles and reads the files from disk to dump them into the email send stream. The attachments are always sent correctly, but the local files are not immediately closed.

    As you probably guessed, the issue is simply that some resources are not automatically disposed when sending is complete, and sure enough the following code change fixes the problem:

        // Create and configure the message
        using (MailMessage msg = this.GetMessage())
        {
            smtp.Send(msg);
            if (this.SendComplete != null)
                this.OnSendComplete(this);
            // or use an explicit msg.Dispose() here
        }

    The Message object requires an explicit call to Dispose() (or a using() block as I have here) to force the attachment files to get closed. I think this is rather odd behavior for this scenario, however. The code I use passes in filenames, and my expectation of an API that accepts file names is that it uses the files by opening and streaming them and then closing them when done. Why keep the streams open and require an explicit .Dispose() by the calling code, which is bound to lead to unexpected behavior just as my customer ran into? Any API-level code should clean up as much as possible, and this is clearly not happening here, resulting in unexpected behavior. Apparently lots of other folks have run into this before, as I found based on a few Twitter comments on this topic.

    Odd to me too is that SmtpClient() doesn't implement IDisposable; it's only the MailMessage (and Attachments) that implement it and require it to clean up leftover resources like open file handles. This means you couldn't even use a using() statement around the SmtpClient code to resolve this; instead you have to wrap it around the message object, which again is rather unexpected.

    Well, chalk that one up to another small unexpected behavior that wasted half an hour of my time; hopefully this post will help someone avoid that same half an hour of hunting and searching.

    Resources: Full code to SmtpClientNative (West Wind Web Toolkit Repository) | SmtpClient Documentation (MSDN)

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in .NET

    Read the article

  • PHP PSR-0 + several namespaces in one file and autoload

    - by Nemoden
    I've been thinking for a while about defining several namespaces in one PHP file, and so having several classes inside this file. Suppose I want to implement something like Doctrine\ORM\Query\Expr:

        Expr.php
        Expr
        |-- Andx.php
        |-- Base.php
        |-- Comparison.php
        |-- Composite.php
        |-- From.php
        |-- Func.php
        |-- GroupBy.php
        |-- Join.php
        |-- Literal.php
        |-- Math.php
        |-- OrderBy.php
        |-- Orx.php
        `-- Select.php

    It would be nice if I had all of this in one file, Expr.php:

        namespace Doctrine\ORM\Query;
        class Expr {
            // code
        }

        namespace Doctrine\ORM\Query\Expr;
        class Func {
            // code
        }
        // etc...

    What I'm thinking of is a directory naming convention and, unlike PSR-0, having several classes and namespaces in one file. It's best explained by the code:

        ls Doctrine/orm/query
        Expr.php

    That's it, only Expr.php. Since Expr.php is what I call a "meta-namespace" for Expr\Func, it makes sense to place all the classes inside Expr.php (as shown above). So the vendor name still starts with an uppercased letter (Doctrine) and the other parts of the namespace start with a lowercased letter. We can write an autoloader that respects this notion:

        function load_class($class) {
            if (class_exists($class)) {
                return true;
            }
            $tokenized_path = explode(DIRECTORY_SEPARATOR,
                str_replace(array("_", "\\"), DIRECTORY_SEPARATOR, $class));
            // array('Doctrine', 'orm', 'query', 'Expr', 'Func');
            //                                   ^^^^
            // first, we look for the first uppercased namespace part, and if
            // it's not last (not the class name), we use it as a filename and
            // wipe away the rest to compose a path to the file we need to include
            if (FALSE !== ($meta_class_index = find_meta_class($tokenized_path))) {
                $new_tokenized_path = array_slice($tokenized_path, 0, $meta_class_index);
                $path_to_class = implode(DIRECTORY_SEPARATOR, $new_tokenized_path);
            } else {
                // no meta class found
                $path_to_class = implode(DIRECTORY_SEPARATOR, $tokenized_path);
            }
            if (file_exists($path_to_class.'.php')) {
                require_once $path_to_class.'.php';
                return true;
            }
            return false;
        }

    Another reason to do so is to reduce the number of PHP files scattered among directories. Usually you check file existence before you require a file, to fail gracefully:

        file_exists($path_to_class.'.php');

    If you take a look at the actual Doctrine\ORM\Query\Expr code, you'll see they use all of the "inner classes", so you actually end up doing:

        file_exists("/path/to/Doctrine/ORM/Query/Expr.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/AndX.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/Base.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/Comparison.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/Composite.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/From.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/Func.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/GroupBy.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/Join.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/Literal.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/Math.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/OrderBy.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/Orx.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/Select.php");

    in your autoloader, which causes quite a few I/O reads. Isn't it too much to check on each user's hit? I'm just putting this up for discussion; I want to hear from other PHP programmers what they think of it. And, of course, if you have a silver bullet addressing the problems I've designated here, please share.
    I have also been wondering whether my vague question fits here; according to the FAQ, it seems this question addresses a "software architecture" problem slash proposal. I'm sorry if my scribble seems a bit clunky :) Thanks.
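    find_meta_class is left undefined in the question; a minimal sketch of what it might look like (an assumption on my part, matching the comments above: find the first uppercased token after the vendor that isn't the final class name, and return the slice length that keeps everything up to and including it):

        <?php
        // Return the slice length that keeps tokens up to the "meta-namespace"
        // file (e.g. 4 for Doctrine/orm/query/Expr/Func), or FALSE if none.
        function find_meta_class(array $tokenized_path)
        {
            $last = count($tokenized_path) - 1;
            // skip the vendor (index 0), which is uppercased by convention
            for ($i = 1; $i < $last; $i++) {
                if (ctype_upper($tokenized_path[$i][0])) {
                    return $i + 1; // slice keeps ...query/Expr
                }
            }
            return false;
        }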

    Read the article

  • "cannot open file userpref.blend@ for writing: Permission denied" in blender

    - by ganezdragon
    I'm using Blender 2.69, installed via the Software Centre. When I save my user preferences through File - User Preferences and click on "Save User Settings", I get the message "cannot open file /home/ganez/.config/blender/2.69/config/userpref.blend@ for writing: permission denied". I have checked the path /home/ganez/.config/blender/2.69/config/ and there is no userpref.blend file present. PS: I think this has something to do with file permissions on that config folder, and I have no idea how to use the chmod command, so any advice? Thank you in advance.
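    A hedged suggestion (assuming the directory ended up owned by root, for example after Blender was once run with sudo): check the ownership of the config tree, take it back for your user, and make it writable before retrying the save:

        # See who owns the config directory; root ownership would explain the error
        ls -ld ~/.config/blender/2.69/config

        # Take ownership back for your user and make the tree writable
        sudo chown -R "$USER":"$USER" ~/.config/blender
        chmod -R u+rwX ~/.config/blender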

    Read the article

  • Read-only file system

    - by John
    The title might not be as descriptive as I would like it to be, but I couldn't come up with a better one. My server's file system went read-only, and I don't understand why it does so or how to solve it. I can SSH into the server, and when trying to start apache2, for example, I get the following:

        username@srv1:~$ sudo service apache2 start
        [sudo] password for username:
        sudo: unable to open /var/lib/sudo/username/1: Read-only file system
         * Starting web server apache2
        (30)Read-only file system: apache2: could not open error log file /var/log/apache2/error.log.
        Unable to open logs
        Action 'start' failed.
        The Apache error log may have more information.

    When I try restarting the server I get:

        username@srv1:~$ sudo shutdown -r now
        [sudo] password for username:
        sudo: unable to open /var/lib/sudo/username/1: Read-only file system

    Once I restart it manually, it just starts up without any warning or message saying something is wrong. I hope somebody can point me in the right direction to resolve this issue. Thanks in advance!
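    A hedged diagnostic path (my suggestion, not from the original post): Linux typically remounts a file system read-only after it hits disk or journal errors, so check the kernel log, try remounting read-write, and schedule a disk check:

        # Look for the I/O or journal errors that triggered the read-only remount
        dmesg | grep -iE 'error|read-only|ext[34]'

        # Try putting the root file system back to read-write (may fail if the disk is bad)
        sudo mount -o remount,rw /

        # Once the remount succeeds, force a file system check of / on the next boot
        sudo touch /forcefsck
        sudo reboot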

    Read the article

  • How to write PowerShell code part 1 (Using external xml configuration file)

    - by ybbest
    In this post, I will show you how to use an external xml file with PowerShell. The advantage of doing so is that other people do not need to open up your PowerShell code to make configuration changes; all they need to do is change the xml file. I will refactor my site creation script as an example; you can download the script here and the refactored code here.

    1. As you can see below, I hard-code all the variables in the script itself.

        $url = "http://ybbest"
        $WebsiteName = "Ybbest"
        $WebsiteDesc = "Ybbest test site"
        $Template = "STS#0"
        $PrimaryLogin = "contoso\administrator"
        $PrimaryDisplay = "administrator"
        $PrimaryEmail = "[email protected]"
        $MembersGroup = "$WebsiteName Members"
        $ViewersGroup = "$WebsiteName Viewers"

    2. Next, I will show you how to manipulate an xml file using PowerShell. You can use get-content to grab the content of the file.

        [xml] $xmlconfigurations = get-content .\SiteCollection.xml

    3. Then you can set it to a variable (the variable has to be typed [xml]); after that you can read the content of the xml, and PowerShell also gives you nice IntelliSense when you press the Tab key.

        [xml] $xmlconfigurations = get-content .\SiteCollection.xml
        $xmlconfigurations.SiteCollection
        $xmlconfigurations.SiteCollection.SiteName

    4. After refactoring my code, I can set the variables using the xml file as below.

        #Set the parameters
        $siteInformation = $xmlinput.SiteCollection
        $url = $siteInformation.URL
        $siteName = $siteInformation.SiteName
        $siteDesc = $siteInformation.SiteDescription
        $Template = $siteInformation.SiteTemplate
        $PrimaryLogin = $siteInformation.PrimaryLogin
        $PrimaryDisplay = $siteInformation.PrimaryDisplayName
        $PrimaryEmail = $siteInformation.PrimaryLoginEmail
        $MembersGroup = "$WebsiteName Members"
        $ViewersGroup = "$WebsiteName Viewers"
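    The post doesn't show SiteCollection.xml itself; based on the property accesses in steps 3 and 4, it would be shaped roughly like this (element names inferred from the script, values are placeholders):

        <SiteCollection>
            <URL>http://ybbest</URL>
            <SiteName>Ybbest</SiteName>
            <SiteDescription>Ybbest test site</SiteDescription>
            <SiteTemplate>STS#0</SiteTemplate>
            <PrimaryLogin>contoso\administrator</PrimaryLogin>
            <PrimaryDisplayName>administrator</PrimaryDisplayName>
            <PrimaryLoginEmail>admin@contoso.com</PrimaryLoginEmail>
        </SiteCollection>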

    Read the article

  • BIP Debugging to a file

    - by Tim Dexter
    If you use the BIP standalone server, or BIP with OBIEE, and OC4J is your web server: have you ever taken a looksee at the console window (DOS/xterm) that you use to start it? Ever turned on debugging and watched masses of info flow by that window, wishing you could capture it all? I have been debugging today, and on Windoze all that info flies by and gets lost before you can read it! The BIP developers use the System.out.println() and System.err.println() methods in the BIP applications to generate debugging information. Normally the output from these method calls goes to the console where the OC4J process is started. However, you can specify command line options when starting OC4J to direct the stdout and stderr output to files instead: the -out and -err parameters tell OC4J which files to write the output to. All you need do is modify the oc4j.cmd file used to start BIP. I didn't get fancy and just changed the following line in the start section, from:

        set CMDARGS=-config "%SERVER_XML%" -userThreads

    to:

        set CMDARGS=-config "%SERVER_XML%" -out D:\BI\OracleBI\oc4j_bi\j2ee\home\log\oc4j.out -err D:\BI\OracleBI\oc4j_bi\j2ee\home\log\oc4j.err -userThreads

    Bounced the server, and I now have a ballooning pair of debug files that I can pore over to my heart's content. The .out file appears to contain BIP-only log info and the .err file, OBIEE messages. If you are using another web server to host BIP, just check the user docs to find out how to redirect its log output. Note to self: remember to turn off the debug when I'm done!

    Read the article

  • checking apt-get update lock file

    - by stewy613
    I have installed Ubuntu as a dual boot beside Windows, and now I'm having a problem with apt-get update. When I type apt-get update, this is the outcome, and I don't know what to do:

        anthony@anthony-Inspiron-530s:~$ ls
        Desktop    Downloads         examples.desktop~  Pictures  Templates
        Documents  examples.desktop  Music              Public    Videos
        anthony@anthony-Inspiron-530s:~$ apt-get update
        E: Could not open lock file /var/lib/apt/lists/lock - open (13: Permission denied)
        E: Unable to lock directory /var/lib/apt/lists/
        E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)
        E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?
        anthony@anthony-Inspiron-530s:~$ apt-get upgrade
        E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)
        E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?
        anthony@anthony-Inspiron-530s:~$ cd apt-get update
        bash: cd: apt-get: No such file or directory
        anthony@anthony-Inspiron-530s:~$
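    The errors themselves point at the cause ("are you root?"): apt needs root privileges, and cd is for changing directories, not for running commands. Prefixing sudo should be enough:

        sudo apt-get update
        sudo apt-get upgrade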

    Read the article

  • File added to project doesn't get added to packages

    - by lorin
    I'm creating customized binary versions of the OpenStack nova packages (lp:nova) using their packaging scripts (lp:~openstack-ubuntu-packagers/ubuntu/natty/nova/ubuntu). I create binaries by doing:

        dpkg-buildpackage -b -rfakeroot -tc -uc -D

    This creates a set of packages (python-nova, nova-common, nova-compute, ...). In our customized version of the code (lp:~usc-isi/nova/hpc-trunk), we recently merged in some changes from another branch, and there's now a new file in our repository that wasn't upstream: nova/virt/cpuinfo.xml.template. This file isn't getting added to any of the packages, when it should be added to python-nova. Why wouldn't dpkg-buildpackage include this file? A more basic question: how does dpkg-buildpackage determine which files go in which packages? Is it related at all to the debian/watch file? That file contains some URLs pointing to the upstream project:

        version=3
        http://launchpad.net/nova/+download http://launchpad.net/nova/.*/nova-(.*)\.tar\.gz
        http://nova.openstack.org/tarballs/ nova-(.*).tar.gz
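    A hedged pointer rather than a confirmed diagnosis: debian/watch only tells uscan where to find upstream tarballs; it has nothing to do with file selection. The file-to-package mapping for a multi-binary package comes from the debian/*.install files, and for a Python project a non-.py file must also be declared to setuptools/distutils or it never reaches the build at all. A sketch of the setup.py side, using this file's path:

        # setup.py (sketch): distutils/setuptools packages only .py files by default,
        # so a template must be declared as package data:
        from setuptools import setup

        setup(
            name='nova',
            packages=['nova'],
            package_data={'nova': ['virt/cpuinfo.xml.template']},
        )

    The sdist-level equivalent is a MANIFEST.in line such as: include nova/virt/cpuinfo.xml.template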

    Read the article

  • error while loading shared libraries; cannot open shared object file: No such file or directory

    - by glitchyme
    The program evince complains that it can't find libfreetype.so.6; however, I clearly have the file, it's in my LD_LIBRARY_PATH, and furthermore I have another program which uses libfreetype6 and runs just fine. What's going on here?

        jbud@jb-pc ~> evince
        evince: error while loading shared libraries: libfreetype.so.6: cannot open shared object file: No such file or directory
        jbud@jb-pc ~> ldd /usr/bin/evince | grep freetype
        libfreetype.so.6 => /usr/local/lib/libfreetype.so.6 (0x00007f912179d000)
        jbud@jb-pc ~> file /usr/local/lib/libfreetype.so.6
        /usr/local/lib/libfreetype.so.6: symbolic link to `libfreetype.so.6.11.1'
        jbud@jb-pc ~> file /usr/local/lib/libfreetype.so.6.11.1
        /usr/local/lib/libfreetype.so.6.11.1: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=0x21a4b8005e0c9a42af001b35fb984f4e25efc71c, not stripped
        jbud@jb-pc ~> echo $LD_LIBRARY_PATH
        /usr/lib/:/usr/lib64/:/usr/lib/x86_64-linux-gnu/:/usr/local/lib/
        jbud@jb-pc ~> ldd jdrive/jstuff/work/personal/noengine/client | grep freetype
        libfreetype.so.6 => /usr/local/lib/libfreetype.so.6 (0x00007feb5ac89000)
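    One hedged possibility: if evince is started in a context that doesn't inherit LD_LIBRARY_PATH (a desktop launcher, or a setuid helper that ignores the variable), the dynamic loader falls back to the ldconfig cache, and /usr/local/lib may not be registered there. Registering it system-wide would rule that out:

        # Add /usr/local/lib to the loader's permanent search path
        echo /usr/local/lib | sudo tee /etc/ld.so.conf.d/usr-local-lib.conf
        sudo ldconfig

        # Verify the library is now in the cache
        ldconfig -p | grep libfreetype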

    Read the article

  • SQL SERVER – Transaction Log Full – Transaction Log Larger than Data File – Notes from Fields #001

    - by Pinal Dave
    I am very excited to announce a new series on this blog: Notes from Fields. I have been blogging for almost 7 years on this blog and it has been a wonderful experience. Though I have extensive experience with SQL and databases, it is always a good idea to consult experts for their advice and opinion. Following that thought process, I have started this new series. In it we will have notes from various experts in the database world. My friends at Linchpin People have graciously decided to support me in this new initiative. Linchpin People are database coaches and wellness experts for a data-driven world. In this very first episode of the Notes from Fields series, database expert Tim Radney (partner at Linchpin People) explains a very common issue DBAs and developers face in their careers: the database log fills up the hard drive, or the database log grows larger than the data file. Read Tim's experience in his own words.

    As a consultant, I encounter a number of common issues with clients. One of the more common things I encounter is finding a user database in the FULL recovery model that does not take regular transaction log backups, or has never had a transaction log backup at all. When I find this, the transaction log is usually several times larger than the data file.

    Finding this issue is very significant to me in that it allows me to discuss service level agreements with the client. I get to ask questions such as: are nightly full backups sufficient, or do they need point-in-time recovery? This conversation often leads to an engagement with the customer and gets them thinking about their disaster recovery and high availability solutions. The issue is also very prominent on SQL Server forums, usually under titles like "Help, my transaction log has filled up my disk" or "Help, my transaction log is many times the size of my database".

    In cases where the client only needs the previous night's full backup, I am able to change the recovery model to SIMPLE and shrink the transaction log using DBCC SHRINKFILE (2,1), or by specifying the transaction log file name, using DBCC SHRINKFILE (file_name, target_size). When the client needs point-in-time recovery, in most cases I will still end up switching the client to the SIMPLE recovery model to truncate the transaction log, followed by a full backup. I will then schedule a SQL Agent job to take the regular transaction log backups, with an interval determined by the client to meet their service level agreements.

    It should also be noted that typically, when I find an overgrown transaction log, the virtual log file count is also out of control. My cleanup will always take that into account as well. That is a subject for a future blog post.

    If your SQL Server is facing any issue, we can Fix Your SQL Server.

    Additional reading:
    Monitoring SQL Server Database Transaction Log Space Growth - DBCC SQLPERF(logspace)
    SQL SERVER - How to Stop Growing Log File Too Big
    Shrinking Truncate Log File - Log Full

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
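    A sketch of the sequence Tim describes for the "nightly fulls are enough" case (database name, logical log file name, and backup paths are placeholders; note that SIMPLE recovery gives up point-in-time restore):

        -- Switch to SIMPLE so the log stops waiting for a log backup
        ALTER DATABASE MyDb SET RECOVERY SIMPLE;

        -- Shrink the now-truncatable log file (logical name is a placeholder)
        USE MyDb;
        DBCC SHRINKFILE (MyDb_log, 1);

        -- If point-in-time recovery is required: go back to FULL, take a full
        -- backup to restart the log chain, and schedule regular log backups
        ALTER DATABASE MyDb SET RECOVERY FULL;
        BACKUP DATABASE MyDb TO DISK = 'D:\Backups\MyDb.bak';
        BACKUP LOG MyDb TO DISK = 'D:\Backups\MyDb.trn';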

    Read the article

  • 7 Ubuntu File Manager Features You May Not Have Noticed

    - by Chris Hoffman
    The Nautilus file manager included with Ubuntu includes some useful features you may not notice unless you go looking for them. You can create saved searches, mount remote file systems, use tabs in your file manager, and more. Ubuntu's file manager also includes built-in support for sharing folders on your local network: the Sharing Options dialog creates and configures network shares compatible with both Linux and Windows machines.

    Read the article

  • Terminal error messages: bash: /dev/cgroup/cpu/user/2823/tasks: No such file or directory

    - by sasaenator
    This is what I get when I start up the terminal:

        bash: /dev/cgroup/cpu/user/2823/tasks: No such file or directory
        bash: /dev/cgroup/cpu/user/2823/notify_on_release: No such file or directory
        bash: /dev/cgroup/cpu/user/2823/tasks: No such file or directory
        bash: /dev/cgroup/cpu/user/2823/notify_on_release: No such file or directory
        sasa@sasa:~$

    I reinstalled 10.10 yesterday because of other problems; I didn't have this error message before. I have a separate /home partition, and the new installation picked up almost all of the old settings, including some which I don't like, but it looks like that is not a problem, or maybe I am wrong? Wouldn't ask if I knew! :) I'll be glad to post more info if someone needs it!
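    A hedged guess, given the separate /home: these messages usually come from the old "cgroup CPU scheduling hack" that some Ubuntu guides had people add to their shell startup files; after the reinstall, /dev/cgroup no longer exists, but the carried-over dotfiles still try to write to it. Worth checking:

        # Find leftover cgroup lines in the startup files carried over from the old install
        grep -n cgroup ~/.bashrc ~/.profile ~/.bash_profile 2>/dev/null
        # ...then delete or comment out the matching lines and open a new terminal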

    Read the article

  • Why doesn't apache2 respect my envvars file?

    - by Avery Chan
    My envvars file has these lines in it:

        export APACHE_RUN_USER=www-data
        export APACHE_RUN_GROUP=www-data

    My apache2.conf has these lines in it:

        # These need to be set in /etc/apache2/envvars
        User ${APACHE_RUN_USER}
        Group ${APACHE_RUN_GROUP}

    But when I run apache2 -M I get this:

        apache2: bad user name ${APACHE_RUN_USER}

    A temporary fix is to hard-code www-data into my apache2.conf file. There was some speculation here that this happens because some configuration script didn't replace the env vars correctly in my apache2.conf file. Regardless, how do I get apache2 to consult my envvars file? As another data point, this site seems to indicate that envvars is generated at build time but read by apache2ctl at runtime, suggesting that the file isn't just poop left over by the build process.
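    That last data point is, as far as I can tell, the answer: the apache2 binary never reads envvars itself; the apache2ctl wrapper sources it and then invokes apache2. So either go through the wrapper, or load the variables into your shell first:

        # Either let the wrapper source envvars for you:
        sudo apache2ctl -M

        # ...or source it yourself before calling the binary directly:
        source /etc/apache2/envvars
        apache2 -M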

    Read the article
