Search Results

Search found 77259 results on 3091 pages for 'file writing'.


  • vhost.conf file in PLESK not working as intended

    - by Saif Bechan
    I have configured a vhost file for my domain but it does not seem to work. These are the steps I took; please correct me if I am wrong. First I made a file called vhost.conf in /var/www/vhosts/*domain*/conf/vhost.conf. The content of the vhost file looks like this: <Directory /var/www/vhosts/*domain*/httpdocs> php_admin_flag engine on php_admin_flag display_errors on </Directory> Then, in my /etc/php.ini, I set display_errors=Off. After everything I rebuilt with: /usr/local/psa/admin/sbin/websrvmng -a But I don't see any errors on my page. Only when I turn on display_errors in /etc/php.ini can I see the errors. I know for a fact that the vhost file is read, because when I type nonsense values I get an error when restarting Apache saying there are errors in the vhost file. Does anyone know what the problem could be? Should there be special settings in either the php.ini file or the httpd.conf file? The httpd.conf I edit is /etc/httpd/conf/httpd.conf. Is this the file that PLESK uses or is there another? The values I see there do not really reflect the http folders of my domain. The httpd file looks like this now: # The document root DocumentRoot "/var/www/html" # I guess this is the base directory <Directory /> Order Deny,Allow Deny from all Options None AllowOverride None </Directory> # And I guess this is where all my domains are located, but there aren't any here <Directory "/var/www/html"> Options None AllowOverride None Order allow,deny Allow from all </Directory> However, I don't use the directory /var/www/html; I use /var/www/vhosts. The only folder found in /var/www/html is one called awstats. Does Plesk use other files, and where are they located? I hope this all makes sense, and I hope I can find a solution.

    Read the article

  • Saving a file in a CSV type in Excel always removes the BOM

    - by rickp
    I've been trying (unsuccessfully) to find a reasonable solution or explanation for why Excel defaults to removing the BOM when saving a file to the CSV type. Please forgive me if you find this a duplicate of this question. That question handles reading CSV files with non-ASCII encoding, but it doesn't cover saving the file back out (which is where the biggest issue lies). Here is my current situation (which I gather is common among localized software dealing with Unicode characters and a CSV format): We export data to a CSV format using UTF-16LE, ensuring the BOM is set (0xFFFE). We validate the file after it is generated with a hex editor to ensure the BOM was set correctly. We open the file in Excel (for this example we're exporting Japanese characters) and see that Excel loads the file with the correct encoding. Attempting to save this file prompts you with a warning message indicating that the file may contain features that are not compatible with Unicode encoding, but asks if you'd like to save anyway. If you open the Save As dialog, it will immediately ask you to save the file as "Unicode Text" rather than CSV. If you select the "CSV" extension and save the file, it removes the BOM (obviously along with all the Japanese characters). Why would this happen? Is there a solution to this problem, or is it a known 'bug'/limitation of Excel? Additionally (as a side issue), it appears that Excel, when loading UTF-16LE encoded CSV files, only uses TAB delimiters. Again, is this another known 'bug'/limitation of Excel?
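
    To see the encodings involved outside of Excel, here is a minimal C# sketch (the file names and sample data are made up) that writes the same row once as UTF-16LE with a BOM, as the export above does, and once as UTF-8 with a BOM, which is a common workaround because recent Excel versions detect the UTF-8 BOM on open and still honour comma delimiters:

      using System;
      using System.IO;
      using System.Text;

      class BomDemo
      {
          static void Main()
          {
              // UTF-16LE with a BOM (bytes FF FE on disk) -- the encoding the export above uses.
              // Excel reads it, but treats it as "Unicode Text" (tab-delimited) rather than CSV.
              var utf16 = new UnicodeEncoding(false, true);
              File.WriteAllText("export-utf16.csv", "名前\tふりがな\r\n", utf16);

              // UTF-8 with a BOM (bytes EF BB BF) -- a common workaround, since modern Excel
              // detects this BOM on open and still splits on commas.
              var utf8WithBom = new UTF8Encoding(true);
              File.WriteAllText("export-utf8.csv", "名前,ふりがな\r\n", utf8WithBom);

              // Dump the first two bytes of each file so the BOM can be confirmed --
              // the same check the question performs with a hex editor.
              foreach (string name in new[] { "export-utf16.csv", "export-utf8.csv" })
              {
                  byte[] head = File.ReadAllBytes(name);
                  Console.WriteLine("{0}: {1:X2} {2:X2}", name, head[0], head[1]);
              }
          }
      }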

    Read the article

  • xcopy Not Suppressing File/Directory Query

    - by Daniel Bingham
    Hey folks, I'm attempting to use xcopy to copy a file from one machine to another on our network as part of a Java program. I'm calling xcopy like this: xcopy "C:\Program Files\path\to\my\file" "\\othermachine\c$\Documents and Settings\<myUserName>\Desktop\Test\path\in\directory\structure\to\file" /e /y /i Because I'm calling it from within Java, I need all the prompts to be suppressed. For the most part, /i and /y have done exactly that. However, for this one file /i fails and I get the file-or-directory prompt. The result is that it hangs the entire program. I've also tried calling it with /s /t /q appended to the existing options, to no avail. Why isn't /i working to suppress the File or Directory prompt? Is there an order I need to call the options in? Is there something else I need to do? EDIT: I should mention, the file is a text file - a single line of text. It does not have an extension. It looks like this: FILE-NAME

    Read the article

  • Information on the BMPP File Extension/Format

    - by Angel Brighteyes
    I am looking for information on the file type BMPp. Namely, I need an application that can create this file type, preferably open source or free. Wikipedia says, for BMP File Format, that 'BMPp' is a "type code", which is the "mechanism used by pre-OSX Macs ... to denote a file's format..." (look in the little info-box of general information under "Type code"). Continuing my research, I found an old 2009 archived mailing list post, "Re: Incorrect png file type 'PNG'", that talks about something related to another problem a developer was having. In the response he talks about there being variant file types, and lists BMPp as being linked to an old version of Graphics Converter. The company Lemkesoft sells Graphics Converter, which I am not willing to purchase. I can't imagine that the only program in existence that can make a BMPp file is that program. There has got to be another way to make that file type, other than creating a BMP file and just renaming it to BMPp (unless, of course, it really is that easy). This is the first time I've run into this file format, and it took a bit of searching on Google, Bing, and Wikipedia to find the information that I've posted here. Any further help would be appreciated.

    Read the article

  • What arguments to use to explain why a SQL DB is far better than a flat file

    - by jamone
    The higher-ups in my company were told by good friends that flat files are the way to go, and that we should switch from MS SQL Server to them for everything we do. We have over 300 servers and hundreds of different databases. From just the few I'm involved with, we have 10 billion records in quite a few of them, with upwards of 100k new records a day and who knows how many updates... A couple of others and I need to come up with a response saying why we shouldn't do this. Most of our stuff is ASP.NET with some legacy ASP. We thought about making a simple console app that tests/times the same interactions between a flat file (stored on the network) and SQL over the network, doing large inserts, searches, updates etc., along with things like random network disconnects. This would show them how bad flat files can be, especially when you are dealing with millions of records. What things should I use in my response? What should I do with my demo code to illustrate this? My short list so far: Security Concurrent access Performance with large amounts of data Amount of time to do such a massive rewrite/switch Lack of transactions PITA to map relational data to flat files I fear that this will be a great post on The Daily WTF someday if I can't stop it now.
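
    One way to make the demo concrete is the console benchmark described above; below is a minimal C# sketch (the UNC path, connection string, table name and key value are all hypothetical). The point it illustrates is that a flat-file lookup is a full scan shipped across the network, while an indexed SQL lookup is resolved on the server and only the matching row comes back:

      using System;
      using System.Data.SqlClient;
      using System.Diagnostics;
      using System.IO;
      using System.Linq;

      class FlatFileVsSql
      {
          const string FlatFile = @"\\fileserver\share\records.csv";                               // hypothetical UNC path
          const string ConnStr  = "Server=dbserver;Database=Demo;Integrated Security=true";        // hypothetical

          static void Main()
          {
              // 1) Point lookup in a flat file: every search is a full scan pulled over the wire.
              Stopwatch sw = Stopwatch.StartNew();
              string[] hit = File.ReadLines(FlatFile)
                                 .Select(line => line.Split(','))
                                 .FirstOrDefault(fields => fields[0] == "4999999");
              sw.Stop();
              Console.WriteLine("Flat file scan: {0} ms (found: {1})", sw.ElapsedMilliseconds, hit != null);

              // 2) The same lookup against an indexed table: the server does the work
              //    and only the matching row crosses the network.
              sw.Restart();
              using (SqlConnection cn = new SqlConnection(ConnStr))
              using (SqlCommand cmd = new SqlCommand("SELECT TOP 1 * FROM Records WHERE Id = @id", cn))
              {
                  cmd.Parameters.AddWithValue("@id", 4999999);
                  cn.Open();
                  using (SqlDataReader rdr = cmd.ExecuteReader())
                  {
                      rdr.Read();
                  }
              }
              sw.Stop();
              Console.WriteLine("Indexed SQL lookup: {0} ms", sw.ElapsedMilliseconds);
          }
      }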

    Read the article

  • File upload fails when user is authenticated. Using IIS7 Integrated mode.

    - by Nikkelmann
    These are the user identities my website tells me that it uses: Logged on: NT AUTHORITY\NETWORK SERVICE (Can not write any files at all) and Not logged on: WSW32\IUSR_77 (Can write files to any folder) I have a ASP.NET 4.0 website on a shared hosting IIS7 web server running in Integrated mode with 32-bit applications support enabled and MSSQL 2008. Using classic mode is not an option since I need to secure some static files and I use Routing. In my web.config file I have set the following: <system.webServer> <modules runAllManagedModulesForAllRequests="true" /> </system.webServer> My hosting company says that Impersonation is enabled by default on machine level, so this is not something I can change. I asked their support and they referred me to this article: http://www.codinghub.net/2010/08/differences-between-integrated-mode-and.html Citing this part: Different windows identity in Forms authentication When Forms Authentication is used by an application and anonymous access is allowed, the Integrated mode identity differs from the Classic mode identity in the following ways: * ServerVariables["LOGON_USER"] is filled. * Request.LogognUserIdentity uses the credentials of the [NT AUTHORITY\NETWORK SERVICE] account instead of the [NT AUTHORITY\INTERNET USER] account. This behavior occurs because authentication is performed in a single stage in Integrated mode. Conversely, in Classic mode, authentication occurs first with IIS 7.0 using anonymous access, and then with ASP.NET using Forms authentication. Thus, the result of the authentication is always a single user-- the Forms authentication user. AUTH_USER/LOGON_USER returns this same user because the Forms authentication user credentials are synchronized between IIS 7.0 and ASP.NET. A side effect is that LOGON_USER, HttpRequest.LogonUserIdentity, and impersonation no longer can access the Anonymous user credentials that IIS 7.0 would have authenticated by using Classic mode. How do I set up my website so that it can use the proper identity with the proper permissions? I've looked high and low for any answers regarding this specific problem, but found nil so far... I hope you can help!

    Read the article

  • Linux: Find all symlinks of a given 'original' file? (reverse 'readlink')

    - by sdaau
    Hi all, Consider the following command line snippet: $ cd /tmp/ $ mkdir dirA $ mkdir dirB $ echo "the contents of the 'original' file" > orig.file $ ls -la orig.file -rw-r--r-- 1 $USER $USER 36 2010-12-26 00:57 orig.file # create symlinks in dirA and dirB that point to /tmp/orig.file: $ ln -s $(pwd)/orig.file $(pwd)/dirA/ $ ln -s $(pwd)/orig.file $(pwd)/dirB/lorig.file $ ls -la dirA/ dirB/ dirA/: total 44 drwxr-xr-x 2 $USER $USER 4096 2010-12-26 00:57 . drwxrwxrwt 20 root root 36864 2010-12-26 00:57 .. lrwxrwxrwx 1 $USER $USER 14 2010-12-26 00:57 orig.file -> /tmp/orig.file dirB/: total 44 drwxr-xr-x 2 $USER $USER 4096 2010-12-26 00:58 . drwxrwxrwt 20 root root 36864 2010-12-26 00:57 .. lrwxrwxrwx 1 $USER $USER 14 2010-12-26 00:58 lorig.file -> /tmp/orig.file At this point, I can use readlink to see what the 'original' (well, I guess the usual term here is either 'target' or 'source', but those in my mind can be opposite concepts as well, so I'll just call it 'original') file of the symlinks is, i.e. $ readlink -f dirA/orig.file /tmp/orig.file $ readlink -f dirB/lorig.file /tmp/orig.file ... However, what I'd like to know is: is there a command I could run on the 'original' file to find all the symlinks that point to it? In other words, something like (pseudo): $ getsymlinks /tmp/orig.file /tmp/dirA/orig.file /tmp/dirB/lorig.file Thanks in advance for any comments, Cheers!

    Read the article

  • No Preview Images in File Open Dialogs on Windows 7

    I've been updating some file uploader code in my photo album today, and while I was working with the uploader I noticed that the Silverlight File Open dialog that handles the file selection didn't ever let me see an image preview for image files. It sure would be nice if I could preview the images I'm about to upload before selecting them from a list. Here's what my list looked like: This is the Medium Icon view, but regardless of the views available, including Content view, only icons are...

    Read the article

  • HTG Explains: What “Everything Is a File” Means on Linux

    - by Chris Hoffman
    One of the defining features of Linux and other UNIX-like operating systems is that “everything is a file.” This is an oversimplification, but understanding what it means will help you understand how Linux works. Many things on Linux appear in your file system, but they aren’t actually files. They’re special files that represent hardware devices, system information, and other things — including a random number generator. These special files may be located in pseudo or virtual file systems such as /dev, which contains special files that represent devices, and /proc, which contains special files that represent system and process information.

    Read the article

  • Exception: Cannot start your application. The workgroup information file is missing or opened exclusively

    - by Jeev
    We were getting this error when trying to connect to a password-protected Access file. This is what the connection string looked like: string conString = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=<path to your Access file>;User Id=;Password=password"; To fix the issue, this is what we did: string conString = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=<path to your Access file>;Jet OLEDB:Database Password=password"; We removed the User Id and changed Password to Jet OLEDB:Database Password. Hope this helps someone.
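
    For context, here is a minimal, self-contained C# sketch of the working approach (the database path and password are placeholders): a database-level password belongs in the Jet OLEDB:Database Password keyword, while User Id/Password only apply to workgroup (user-level) security, which is why Jet complains about the workgroup information file.

      using System;
      using System.Data.OleDb;

      class AccessOpenDemo
      {
          static void Main()
          {
              // Placeholder path and password; the database uses a *database* password
              // (set via Tools > Security in Access), not user-level/workgroup security.
              string conString =
                  @"Provider=Microsoft.Jet.OLEDB.4.0;" +
                  @"Data Source=C:\data\mydb.mdb;" +
                  @"Jet OLEDB:Database Password=password";

              using (OleDbConnection con = new OleDbConnection(conString))
              {
                  con.Open();
                  Console.WriteLine("Connection state: " + con.State);
              }
          }
      }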

    Read the article

  • You should NOT be writing jQuery in SharePoint if…

    - by Mark Rackley
    Yes… another one of these posts. What can I say? I’m a pot stirrer.. a rabble rouser *rabble rabble* jQuery in SharePoint seems to be a fairly polarizing issue with one side thinking it is the most awesome thing since Princess Leia as the slave girl in Return of the Jedi and the other half thinking it is the worst idea since Mannequin 2: On the Move. The correct answer is OF COURSE “it depends”. But what are those deciding factors that make jQuery an awesome fit or leave a bad taste in your mouth? Let’s see if I can drive the discussion here with some polarizing comments of my own… I know some of you are getting ready to leave your comments even now before reading the rest of the blog, which is great! Iron sharpens iron… These discussions hopefully open us up to understanding the entire process better and think about things in a different way. You should not be writing jQuery in SharePoint if you are not a developer… Let’s start off with my most polarizing and rant filled portion of the blog post. If you don’t know what you are doing or you don’t have a background that helps you understand the implications of what you are writing then you should not be writing jQuery in SharePoint! I truly believe that one of the biggest reasons for the jQuery haters is because of all the bad jQuery out there. If you don’t know what you are doing you can do some NASTY things! One of the best stories I’ve heard about this is from my good friend John Ferringer (@ferringer). John tells this story during our Mythbusters session we do together. One of his clients was undergoing a Denial of Service attack and they couldn’t figure out what was going on! After much searching they found that some genius jQuery developer wrote some code for an image rotator, but did not take into account what happens when there are no images to load! The code just kept hitting the servers over and over and over again which prevented anything else from getting done! Now, I’m NOT saying that I have not done the same sort of thing in the past or am immune from such mistakes. My point is that if you don’t know what you are doing, there are very REAL consequences that can have a major impact on your organization AND they will be hard to track down.  Think how happy your boss will be after you copy and pasted some jQuery from a blog without understanding what it does, it brings down the farm, AND it takes them 3 days to track it back to you.  :/ Good times will not be had. Like it or not JavaScript/jQuery is a programming language. While you .NET people sit on your high horses because your code is compiled and “runs faster” (also debatable), the rest of us will be actually getting work done and delivering solutions while you are trying to figure out why your widget won’t deploy. I can pick at that scab because I write .NET code too and speak from experience. I can do both, and do both well. So, I am not speaking from ignorance here. In JavaScript/jQuery you have variables, loops, conditionals, functions, arrays, events, and built in methods. If you are not a developer you just aren’t going to take advantage of all of that and use it correctly. Ahhh.. but there is hope! There is a lot of jQuery resources out there to help you learn and learn well! There are many experts on the subject that will gladly tell you when you are smoking crack. I just this minute saw a tweet from @cquick with a link to: “jQuery Fundamentals”. I just glanced through it and this may be a great primer for you aspiring jQuery devs. 
Take advantage of all the resources and become a developer! Hey, it will look awesome on your resume right? You should not be writing jQuery in SharePoint if it depends too much on client resources for a good user experience I’ve said it once and I’ll say it over and over until you understand. jQuery is executed on the client’s computer. Got it? If you are looping through hundreds of rows of data, searching through an enormous DOM, or performing many calculations it is going to take some time! AND if your user happens to be sitting on some old PC somewhere that they picked up at a garage sale their experience will be that much worse! If you can’t give the user a good experience they will not use the site. So, if jQuery is causing the user to have a bad experience, don’t use it. I sometimes go as far to say that you should NOT go to jQuery as a first option for external facing web sites because you have ZERO control over what the end user’s computer will be. You just can’t guarantee an awesome user experience all of the time. Ahhh… but you have no choice? (where have I heard that before?). Well… if you really have no choice, here are some tips to help improve the experience: Avoid screen scraping This is not 1999 and SharePoint is not an old green screen from a mainframe… so why are you treating it like it is? Screen scraping is time consuming and client intensive. Take advantage of tools like SPServices to do your data retrieval when possible. Fine tune your DOM searches A lot of time can be eaten up just searching the DOM and ignoring table rows that you don’t need. Write better jQuery to only loop through tables rows that you need, or only access specific elements you need. Take advantage of Element ID’s to return the one element you are looking for instead of looping through all the DOM over and over again. Write better jQuery Remember this is development. Think about how you can write cleaner, faster jQuery. This directly relates to the previous point of improving your DOM searches, but also when using arrays, variables and loops. Do you REALLY need to loop through that array 3 times? How can you knock it down to 2 times or even 1? When you have lots of calculations and data that you are manipulating every operation adds up. Think about how you can streamline it. Back in the old days before RAM was abundant, Cores were plentiful and dinosaurs roamed the earth, us developers had to take performance into account in everything we did. It’s a lost art that really needs to be used here. You should not be writing jQuery in SharePoint if you are sending a lot of data over the wire… Developer:  “Awesome… you can easily call SharePoint’s web services to retrieve and write data using SPServices!” Administrator: “Crap! you can easily call SharePoint’s web services to retrieve and write data using SPServices!” SPServices may indeed be the best thing that happened to SharePoint since the invention of SharePoint Saturdays by Godfather Lotter… BUT you HAVE to use it wisely! (I REFUSE to make the Spiderman reference). If you do not know what you are doing your code will bring back EVERY field and EVERY row from a list and push that over the internet with all that lovely XML wrapped around it. That can be a HUGE amount of data and will GREATLY impact performance! Calling several web service methods at the same time can cause the same problem and can negatively impact your SharePoint servers. 
These problems, thankfully, are not difficult to rectify if you are careful: Limit list data retrieved Use CAML to reduce the number of rows returned and limit the fields returned using ViewFields.  You should definitely be doing this regardless. If you aren’t I hope your admin thumps you upside the head. Batch large list updates You may or may not have noticed that if you try to do large updates (hundreds of rows) that the performance is either completely abysmal or it fails over half the time. You can greatly improve performance and avoid timeouts by breaking up your updates into several smaller updates. I don’t know if there is a magic number for best performance, it really depends on how much data you are sending back more than the number of rows. However, I have found that 200 rows generally works well.  Play around and find the right number for your situation. Delay Web Service calls when possible One of the cool things about jQuery and SPServices is that you can delay queries to the server until they are actually needed instead of doing them all at once. This can lead to performance improvements over DataViewWebParts and even .NET code in the right situations. So, don’t load the data until it’s needed. In some instances you may not need to retrieve the data at all, so why retrieve it ALL the time? You should not be writing jQuery in SharePoint if there is a better solution… jQuery is NOT the silver bullet in SharePoint, it is not the answer to every question, it is just another tool in the developers toolkit. I urge all developers to know what options exist out there and choose the right one! Sometimes it will be jQuery, sometimes it will be .NET,  sometimes it will be XSL, and sometimes it will be some other choice… So, when is there a better solution to jQuery? When you can’t get away from performance problems Sometimes jQuery will just give you horrible performance regardless of what you do because of unavoidable obstacles. In these situations you are going to have to figure out an alternative. Can I do it with a DVWP or do I have to crack open Visual Studio? When you need to do something that jQuery can’t do There are lots of things you can’t do in jQuery like elevate privileges, event handlers, workflows, or interact with back end systems that have no web service interface. It just can’t do everything. When it can be done faster and more efficiently another way Why are you spending time to write jQuery to do a DataViewWebPart that would take 5 minutes? Or why are you trying to implement complicated logic that would be simple to do in .NET? If your answer is that you don’t have the option, okay. BUT if you do have the option don’t reinvent the wheel! Take advantage of the other tools. The answer is not always jQuery… sorry… the kool-aid tastes good, but sweet tea is pretty awesome too. You should not be using jQuery in SharePoint if you are a moron… Let’s finish up the blog on a high note… Yes.. it’s true, I sometimes type things just to get a reaction… guess this section title might be a good example, but it feels good sometimes just to type the words that a lot of us think… So.. don’t be that guy! Another good buddy of mine that works for Microsoft told me. “I loved jQuery in SharePoint…. until I had to support it.”. He went on to explain that some user was making several web service calls on a page using jQuery and then was calling Microsoft and COMPLAINING because the page took so long to load… DUH! 
What do you expect to happen when you are pushing that much data over the wire and are making that many web service calls at once!! It’s one thing to write that kind of code and accept it’s just going to take a while, it’s COMPLETELY another issue to do that and then complain when it’s not lightning fast!  Someone’s gene pool needs some chlorine. So, I think this is a nice summary of the blog… DON’T be that guy… don’t be a moron. How can you stop yourself from being a moron? Ah.. glad you asked, here are some tips: Think Is jQuery the right solution to my problem? Is there a better approach? What are the implications and pitfalls of using jQuery in this situation? Search What are others doing? Does someone have a better solution? Is there a third party library that does the same thing I need? Plan Write good jQuery. Limit calculations and data sent over the wire and don’t reinvent the wheel when possible. Test Okay, it works well on your machine. Try it on others ESPECIALLY if this is for an external site. Test with empty data. Test with hundreds of rows of data. Test as many scenarios as possible. Monitor those server resources to see the impact there as well. Ask the experts As smart as you are, there are people smarter than you. Even the experts talk to each other to make sure they aren't doing something stupid. And for the MOST part they are pretty nice guys. Marc Anderson and Christophe Humbert are two guys who regularly keep me in line. Make sure you aren’t doing something stupid. Repeat So, when you think you have the best solution possible, repeat the steps above just to be safe.  Conclusion jQuery is an awesome tool and has come in handy on many occasions. I’m even teaching a 1/2 day SharePoint & jQuery workshop at the upcoming SPTechCon in Boston if you want to berate me in person. However, it’s only as awesome as the developer behind the keyboard. It IS development and has its pitfalls. Knowledge and experience are invaluable to giving the user the best experience possible.  Let’s face it, in the end, no matter our opinions, prejudices, or ego providing our clients, customers, and users with the best solution possible is what counts. Period… end of sentence…

    Read the article

  • Importing data from text file to specific columns using BULK INSERT

    - by Dinesh Asanka
    Bulk insert is much faster than using other techniques such as SSIS. However, when you are using bulk insert you can't insert into specific columns. If, for example, there are five columns in a table, you should have five values for each record in the text file you are importing from. This is an issue when you are expecting default values to be inserted into tables. Let us say you have a table as below. In this table, you are expecting ID, Status and CreatedDate to be updated automatically, so your text file may only have FirstName and LastName values, as below: Dinesh,Asanka Saman,Liyanage Ruwan,Silva Susantha,Bathige Jude,Peires Sanjeewa,Jayawickrama If you use bulk insert into this table as follows, you will be returned an error: Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 1, column 1 (ID). To avoid this you will need to create a view with the columns you are expecting to fill and use bulk insert against it. If you check the table now, you will see the values from the text file alongside the default values.
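
    Sketched below in C# (the connection string, table, view and file path are placeholders) is the shape of that workaround: a view exposing only the two columns present in the text file, and a BULK INSERT issued against the view so the remaining columns fall back to their identity/default values. Note that the BULK INSERT path is resolved on the SQL Server machine, not the client.

      using System.Data.SqlClient;

      class BulkInsertViaView
      {
          // Placeholder connection string for illustration.
          const string ConnStr = "Server=.;Database=Demo;Integrated Security=true";

          static void Main()
          {
              using (SqlConnection cn = new SqlConnection(ConnStr))
              {
                  cn.Open();

                  // A view exposing only the columns supplied by the text file,
                  // so ID, Status and CreatedDate are filled in by the table itself.
                  Exec(cn, @"CREATE VIEW dbo.vw_EmployeeImport AS
                             SELECT FirstName, LastName FROM dbo.Employee;");

                  // Bulk insert against the view instead of the base table.
                  // The path is read by the SQL Server service, so it must exist on the server.
                  Exec(cn, @"BULK INSERT dbo.vw_EmployeeImport
                             FROM 'C:\import\employees.txt'
                             WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');");
              }
          }

          static void Exec(SqlConnection cn, string sql)
          {
              using (SqlCommand cmd = new SqlCommand(sql, cn))
              {
                  cmd.ExecuteNonQuery();
              }
          }
      }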

    Read the article

  • File Upload multiple files in asp.net (similar to gmail)

    - by superstar
    Hi guys, I need suggestions regarding multiple file uploads using the FileUpload control in ASP.NET (along with C#). I have a FileUpload control; when I click the 'Browse' button and select a file from the file dialog, I want the file to be shown as a link below the FileUpload control (somewhat similar to Gmail). This file should be shown in such a way that it can be deleted if I want to, and I should also be able to upload another file from the FileUpload control. All these files should be uploaded to a location when I use a button click event at the end. I think I have made myself clear. Any suggestions are really helpful. Thanks.
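
    One possible shape for this (a sketch only; all control and folder names below are hypothetical) is to stash each posted file in Session when an "Attach" button is clicked, render the pending file names as removable links, and only write everything to disk when the final button is clicked:

      using System;
      using System.Collections.Generic;
      using System.IO;
      using System.Web;
      using System.Web.UI;

      public partial class MultiUpload : Page
      {
          // Pending uploads kept per user until the final "Upload all" click.
          private List<KeyValuePair<string, byte[]>> Pending
          {
              get
              {
                  var list = Session["pendingUploads"] as List<KeyValuePair<string, byte[]>>;
                  if (list == null)
                  {
                      list = new List<KeyValuePair<string, byte[]>>();
                      Session["pendingUploads"] = list;
                  }
                  return list;
              }
          }

          // "Attach" button: buffer the posted file and rebind the list of links.
          protected void btnAttach_Click(object sender, EventArgs e)
          {
              HttpPostedFile f = fuUpload.PostedFile; // fuUpload is a hypothetical FileUpload control
              if (f != null && f.ContentLength > 0)
              {
                  byte[] buffer = new byte[f.ContentLength];
                  f.InputStream.Read(buffer, 0, buffer.Length);
                  Pending.Add(new KeyValuePair<string, byte[]>(Path.GetFileName(f.FileName), buffer));
              }
              // Rebind a Repeater/GridView of file names here, each row with a "Remove"
              // LinkButton whose handler calls Pending.RemoveAt(rowIndex).
          }

          // Final button: write all buffered files to the target folder in one go.
          protected void btnUploadAll_Click(object sender, EventArgs e)
          {
              string folder = Server.MapPath("~/Uploads");
              foreach (KeyValuePair<string, byte[]> item in Pending)
              {
                  File.WriteAllBytes(Path.Combine(folder, item.Key), item.Value);
              }
              Pending.Clear();
          }
      }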

    Read the article

  • What units does the ntp drift file use?

    - by arielf
    When the ntpd daemon is running, the file /var/lib/ntp/ntp.drift gets updated periodically. Example: 17:20 hostname 118 ~> ls -l /var/lib/ntp/ntp.drift -rw-r--r-- 1 ntp ntp 7 May 20 16:46 /var/lib/ntp/ntp.drift # So it looks like it was last updated ~34 minutes ago The file has one number in it; for example, looking at 4 virtual hosts, I find these values, respectively: -22.086 -10.214 -13.669 6.045 I assume these are seconds per day(?), but I'm not sure. man ntpd mentions a different drift file, /etc/ntp.drift, which doesn't seem to exist. The man page doesn't explain what units are used for the drift. Questions: Is /etc/ntp.drift actually /var/lib/ntp/ntp.drift on Ubuntu? What units is the drift expressed in? Thanks!

    Read the article

  • Display folder sizes in file manager

    - by wim
    In the nautilus (or nemo) file manager, the "Size" column shows the file size for files and the number of items contained in a folder for subdirectories. The number of items is not that important for me; it would be more useful if I could make this column show the total size contained under the directory. I had an extension on Windows called FolderSize which shows what I mean: I think it involved a service which ran in the background monitoring filesystem modifications in order to make sure the column was kept up to date. I am interested to know if there is any similar extension for nautilus; I would also be open to switching to another file manager to get this functionality. I am aware of the Disk Usage Analyser in Ubuntu, but what I'm looking for is a solution with file manager integration.

    Read the article

  • how to open a .pdf file in a panel or iframe using asp.net c#

    - by rahul
    I am trying to open a .pdf file on a button click. I want to open the .pdf file in a panel or some iframe. With the following code I can only open the .pdf file in a separate window or in save-as mode: string filepath = Server.MapPath("News.pdf"); FileInfo file = new FileInfo(filepath); if (file.Exists) { Response.ClearContent(); Response.AddHeader("Content-Disposition", "inline; filename=" + file.Name); Response.AddHeader("Content-Length", file.Length.ToString()); Response.ContentType = ReturnExtension(file.Extension.ToLower()); Response.TransmitFile(file.FullName); Response.End(); } How do I assign an iframe to the line below? Response.AddHeader("Content-Disposition", "inline; filename=" + file.Name);
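
    A Content-Disposition header cannot target an iframe; which frame displays the response is decided by whatever element requests the URL. A common pattern (sketched below; the handler name is an assumption, reusing the file from the code above) is to serve the PDF from its own HTTP handler and point the iframe's src at that handler, e.g. <iframe src="ShowPdf.ashx"></iframe> in the page markup:

      // ShowPdf.ashx.cs -- hypothetical generic handler that streams the PDF inline.
      // The page that hosts the panel/iframe simply sets the iframe's src to "ShowPdf.ashx".
      using System.IO;
      using System.Web;

      public class ShowPdf : IHttpHandler
      {
          public void ProcessRequest(HttpContext context)
          {
              string filepath = context.Server.MapPath("News.pdf");
              FileInfo file = new FileInfo(filepath);
              if (!file.Exists)
              {
                  context.Response.StatusCode = 404;
                  return;
              }

              context.Response.ClearContent();
              context.Response.ContentType = "application/pdf";
              context.Response.AddHeader("Content-Disposition", "inline; filename=" + file.Name);
              context.Response.AddHeader("Content-Length", file.Length.ToString());
              context.Response.TransmitFile(file.FullName);
          }

          public bool IsReusable
          {
              get { return false; }
          }
      }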

    Read the article

  • Running a .bash file in Eclipse

    - by Anne Ambe
    I know this is really an Eclipse issue, but I can't seem to log in to their forum. I am running Eclipse Juno for some C/C++ development. However, I wrote a .bash script that initiates the entire program. As an input argument to this script, I have a configuration file which is one directory lower than the .bash file. In a terminal I just do: ./startenb.bash ./CONF/ANNE and it runs just fine. How can I configure the external tools in Eclipse to take this file path as an input argument? Any help or old thread vaguely addressing this issue is highly welcome.

    Read the article

  • .NET Impersonate and file upload issues

    - by Jagd
    I have a webpage that allows a user to upload a file to a network share. When I run the webpage locally (within VS 2008) and try to upload the file, it works! However, when I deploy the website to the webserver and try to upload the file through the webpage, it doesn't work! The error being returned to me on the webserver says "Access to the path '\05prd1\emp\test.txt' is denied. So, obviously, this is a permissions issue. The network share is configured to allow full access both to me (NT authentication) and to the NETWORK SERVICE (which is .NET's default account and what we have set in our IIS application pool as the default user for this website). I have tried this with and without impersonation upon the webserver and neither way works, yet both ways work on my local machine (in other words, with and without impersonation works on my local machine). The code that does the file upload is below. Please note that the code below includes impersonation, but like I said above, I've tried it with and without impersonation and it's made no difference. if (fuCourses.PostedFile != null && fuCourses.PostedFile.ContentLength > 0) { System.Security.Principal.WindowsImpersonationContext impCtx; impCtx = ((System.Security.Principal.WindowsIdentity)User.Identity).Impersonate(); try { lblMsg.Visible = true; // The courses file to be uploaded HttpPostedFile file = fuCourses.PostedFile; string fName = file.FileName; string uploadPath = "\\\\05prd1\\emp\\"; // Get the file name if (fName.Contains("\\")) { fName = fName.Substring( fName.LastIndexOf("\\") + 1); } // Delete the courses file if it is already on \\05prd1\emp FileInfo fi = new FileInfo(uploadPath + fName); if (fi != null && fi.Exists) { fi.Delete(); } // Open new file stream on \\05prd1\emp and read bytes into it from file upload FileStream fs = File.Create(uploadPath + fName, file.ContentLength); using (Stream stream = file.InputStream) { byte[] b = new byte[4096]; int read; while ((read = stream.Read(b, 0, b.Length)) > 0) { fs.Write(b, 0, read); } } fs.Close(); lblMsg.Text = "File Successfully Uploaded"; lblMsg.ForeColor = System.Drawing.Color.Green; } catch (Exception ex) { lblMsg.Text = ex.Message; lblMsg.ForeColor = System.Drawing.Color.Red; } finally { impCtx.Undo(); } } Any help on this would be very appreciated!
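
    One pattern worth trying here (a sketch only, under the assumption that a domain account with write access to the share is available; the advapi32 constants are the standard LOGON32 values) is to impersonate an explicit set of credentials just for the network write, using LogonUser with the NEW_CREDENTIALS logon type, instead of relying on the app pool identity or the caller's token:

      using System;
      using System.IO;
      using System.Runtime.InteropServices;
      using System.Security.Principal;

      static class ShareWriter
      {
          // Standard Win32 logon constants.
          const int LOGON32_LOGON_NEW_CREDENTIALS = 9; // credentials are used for remote access only
          const int LOGON32_PROVIDER_WINNT50 = 3;

          [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
          static extern bool LogonUser(string user, string domain, string password,
                                       int logonType, int logonProvider, out IntPtr token);

          [DllImport("kernel32.dll", SetLastError = true)]
          static extern bool CloseHandle(IntPtr handle);

          // Writes bytes to a UNC path while impersonating the given account.
          // Domain, user name and password are placeholders supplied by the caller.
          public static void Write(string uncPath, byte[] data,
                                   string domain, string user, string password)
          {
              IntPtr token;
              if (!LogonUser(user, domain, password,
                             LOGON32_LOGON_NEW_CREDENTIALS, LOGON32_PROVIDER_WINNT50, out token))
              {
                  throw new InvalidOperationException(
                      "LogonUser failed: " + Marshal.GetLastWin32Error());
              }

              try
              {
                  WindowsIdentity identity = new WindowsIdentity(token);
                  WindowsImpersonationContext ctx = identity.Impersonate();
                  try
                  {
                      File.WriteAllBytes(uncPath, data); // e.g. @"\\05prd1\emp\test.txt"
                  }
                  finally
                  {
                      ctx.Undo();
                  }
              }
              finally
              {
                  CloseHandle(token);
              }
          }
      }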

    Read the article

  • How to call a JavaScript function from one frame to another in Chrome/Webkit with file protocol

    - by bambax
    I have developed an application that has a list of items in one frame; when one clicks on an item it does something in another frame (loads an image). This used to work fine in all browsers, including Chrome 3; it still works fine in Firefox, but recent versions of Chrome (I believe since 4) throw this error: Unsafe JavaScript attempt to access frame with URL (...) from frame with URL (...). Domains, protocols and ports must match. This is obviously a security "feature", but is it possible to get around it? Here is a simple test: index.html: <html> <frameset cols="50%,50%"> <frame src="left.html" name="left"/> <frame src="right.html" name="right"/> </frameset> </html> left.html: <html> <body> <a href="javascript:parent.right.test('hello');">click me</a> </body> </html> right.html: <html> <body> <script> function test(msg) { alert(msg); } </script> </body> </html> The above works in Firefox 3.6 and Chrome 3, but in Chrome 5 it throws the above error... Edit: added the @cols attribute to the frameset element. In fact, it works in Chrome if and only if the pages are served over the http protocol (and from the same domain), but my problem is when the pages are local and served via the file:// protocol. Then it works in Firefox (all versions) and Chrome 3 but not Chrome 5 (I don't have Chrome 4 so I'm not sure about that specific version (and don't know if it's even possible to download a specific Chrome version?) -- but for Chrome 5 I'm very sure it doesn't work).

    Read the article

  • minidlna and samsung tv file format doesn't support

    - by Vasya
    I have a PC with Xubuntu 12.04 and a Samsung 27A950, and I want to use the PC as a DLNA server. But when I run a file of any format on my TV, I get the error: "File format doesn't support". I tried to open *.avi, *.mkv and *.jpg files with the same result. In the error log I can see: [2013/06/27 12:34:03] upnphttp.c:1907: error: Error opening /home/family/media/file1.mkv [2013/06/27 12:34:06] upnphttp.c:1907: error: Error opening /home/family/media/file2.avi So, I have no idea why it doesn't work. Any suggestions? miniDLNA version: Version 1.24.1-stedy. The config file is the default, with only a few directories added. With the version from the default Ubuntu repository it was the same. Some time ago I tried MediaTomb and had the same error.

    Read the article

  • set installer background / not able to run early_command in custom preseed file (precise)

    - by user73093
    I have a custom preseed file for a Precise Live CD (which is loaded correctly on boot; I checked syslog for that). My initial problem is that when booting in install mode (the default behavior for a Live CD), ubiquity runs X with a default wallpaper which is hardcoded to /usr/share/backgrounds/warty-final-ubuntu.png in the Ubiquity code. So my idea was to run early_command (https://help.ubuntu.com/12.04/installation-guide/i386/preseed-advanced.html) to copy my custom wallpaper over /usr/share/backgrounds/warty-final-ubuntu.png, assuming my custom wallpaper already resides on the rootfs in /usr/share/backgrounds. But... it seems the early_command never runs (and I'm sure the preseed file is taken into account). Here is what I have added to my preseed file: d-i preseed/early_command string cp /usr/share/backgrounds/mywallpaper-defaults.jpg /usr/share/backgrounds/warty-final-ubuntu.png Even this one is never run: d-i preseed/early_command string /usr/bin/touch /tmp/testearly Thanks for helping!

    Read the article

  • How to commit a file conversion?

    - by l0b0
    Say you've committed a file of type foo in your favorite vcs: $ vcs add data.foo $ vcs commit -m "My data" After publishing you realize there's a better data format bar. To convert you can use one of these solutions: $ vcs mv data.foo data.bar $ vcs commit -m "Preparing to use format bar" $ foo2bar --output data.bar data.bar $ vcs commit -m "Actual foo to bar conversion" or $ foo2bar --output data.foo data.foo $ vcs commit -m "Converted data to format bar" $ vcs mv data.foo data.bar $ vcs commit -m "Renamed to fit data type" or $ foo2bar --output data.bar data.foo $ vcs rm data.foo $ vcs add data.bar $ vcs commit -m "Converted data to format bar" In the first two cases the conversion is not an atomic operation and the file extension is "lying" in the first commit. In the last case the conversion will not be detected as a move operation, so as far as I can tell it'll be difficult to trace the file history across the commit. Although I'd instinctively prefer the last solution, I can't help thinking that tracing history should be given very high priority in version control. What is the best thing to do here?

    Read the article

  • file.createNewFile() creates files with last-modified time before actual creation time

    - by Kaleb Pederson
    I'm using JPoller to detect changes to files in a specific directory, but it's missing files because they end up with a timestamp earlier than their actual creation time. Here's how I test: public static void main(String [] files) { for (String file : files) { File f = new File(file); if (f.exists()) { System.err.println(file + " exists"); continue; } try { // find out the current time, I would hope to assume that the last-modified // time on the file will definitely be later than this System.out.println("-----------------------------------------"); long time = System.currentTimeMillis(); // create the file System.out.println("Creating " + file + " at " + time); f.createNewFile(); // let's see what the timestamp actually is (I've only seen it <time) System.out.println(file + " was last modified at: " + f.lastModified()); // well, ok, what if I explicitly set it to time? f.setLastModified(time); System.out.println("Updated modified time on " + file + " to " + time + " with actual " + f.lastModified()); } catch (IOException e) { System.err.println("Unable to create file"); } } } And here's what I get for output: ----------------------------------------- Creating test.7 at 1272324597956 test.7 was last modified at: 1272324597000 Updated modified time on test.7 to 1272324597956 with actual 1272324597000 ----------------------------------------- Creating test.8 at 1272324597957 test.8 was last modified at: 1272324597000 Updated modified time on test.8 to 1272324597957 with actual 1272324597000 ----------------------------------------- Creating test.9 at 1272324597957 test.9 was last modified at: 1272324597000 Updated modified time on test.9 to 1272324597957 with actual 1272324597000 The result is a race condition: JPoller records time of last check as xyz...123 File created at xyz...456 File last-modified timestamp actually reads xyz...000 JPoller looks for new/updated files with timestamp greater than xyz...123 JPoller ignores newly added file because xyz...000 is less than xyz...123 I pull my hair out for a while I tried digging into the code but both lastModified() and createNewFile() eventually resolve to native calls so I'm left with little information. For test.9, I lose 957 milliseconds. What kind of accuracy can I expect? Are my results going to vary by operating system or file system? Suggested workarounds? NOTE: I'm currently running Linux with an XFS filesystem. I wrote a quick program in C and the stat system call shows st_mtime as truncate(xyz...000/1000).

    Read the article

  • `power/persist` file not found in USB device sysfs directory

    - by intuited
    The file /usr/share/doc/linux-doc/usb/persist.txt.gz mentions that the USB-persist capability can be enabled for a given USB device by writing 1 to the file persist in that device's directory in /sys/bus/usb/devices/$device/power. This is said — if I understood correctly — to allow mountings of volumes on the drive to persist across power loss during suspend. However, I've discovered that the device I'd like to enable this facility for — a USB hard drive — does not have such a file in its corresponding directory, and that attempts to create it are rebuffed. Is there perhaps a kernel module that needs to be loaded for this to work? Do I need to build a custom kernel? I'm running ubuntu 10.10.

    Read the article
