Search Results

Search found 101795 results on 4072 pages for 'encrypting file system'.

  • Split file with PHP and generate contents

    - by user201140
    How do I split the content below into separate files, without the placeholder tags? I'd also like to take the text inside the placeholder tags and place it in a new contents file. <div class='placeholder'>The First Chapter</div> This is some text. <div class='placeholder'>The Second Chapter</div> This is some more text. <div class='placeholder'>Last Chapter</div> The last chapter. Thanks.
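    A minimal sketch of the splitting logic, written in Python rather than PHP purely as an illustration (the input and output file names are assumptions):

      import re

      # Read the source document (the file name is an assumption).
      with open("book.txt", encoding="utf-8") as f:
          text = f.read()

      # Splitting on the placeholder <div> with a capturing group keeps the titles:
      # parts = [text before first tag, title 1, body 1, title 2, body 2, ...]
      parts = re.split(r"<div class='placeholder'>(.*?)</div>", text)
      titles = parts[1::2]
      bodies = [body.strip() for body in parts[2::2]]

      with open("contents.txt", "w", encoding="utf-8") as toc:
          for number, (title, body) in enumerate(zip(titles, bodies), start=1):
              toc.write(title + "\n")                          # build the contents file
              with open("chapter_%d.txt" % number, "w", encoding="utf-8") as chapter:
                  chapter.write(body + "\n")                   # chapter text without the tag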

  • Reading in data from a file into an array

    - by Sam
    If I have an options file along the lines of this: size = 4 data = 1100010100110010 and I have a 2D size * size array that I want to populate with the values in data, what's the best way of doing it? To clarify, for the example above I'd want an array like this: int[4][4] array = {{1,1,0,0}, {0,1,0,1}, {0,0,1,1}, {0,0,1,0}}. (Not real code, but you get the idea.) Size can really be any number, though. I'm thinking I'd have to read in the size, malloc an array, then maybe read in a string full of data, loop through each char in the data, cast it to an int and stick it in the appropriate index? But I really have no idea how to go about it, and have been searching for a while with no luck. Any help would be cool! :)
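    A sketch of the parsing step in Python (the C version would read size first and malloc a size*size buffer; the file name here is an assumption):

      options = {}
      with open("options.txt") as f:              # assumed file name
          for line in f:
              if "=" in line:
                  key, value = (part.strip() for part in line.split("=", 1))
                  options[key] = value

      size = int(options["size"])
      data = options["data"]                      # e.g. "1100010100110010"

      # Row-major fill: the character at index row*size + col becomes grid[row][col].
      grid = [[int(data[row * size + col]) for col in range(size)]
              for row in range(size)]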

  • Optimizing memory usage and changing file contents with PHP

    - by errata
    In a function like this:

      function download($file_source, $file_target) {
          $rh = fopen($file_source, 'rb');
          $wh = fopen($file_target, 'wb');
          if (!$rh || !$wh) {
              return false;
          }
          while (!feof($rh)) {
              if (fwrite($wh, fread($rh, 1024)) === FALSE) {
                  return false;
              }
          }
          fclose($rh);
          fclose($wh);
          return true;
      }

    what is the best way to rewrite the last few bytes of a file with my custom string? Thanks!
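    The usual approach is to open the target in a read/write mode, seek backwards from the end, and overwrite in place. A sketch in Python as a stand-in for PHP's fopen/fseek/fwrite (the file name and replacement string are assumptions):

      import os

      replacement = b"CUSTOM"                          # assumed custom string

      with open("target.bin", "r+b") as f:             # r+b: read/write without truncating
          f.seek(-len(replacement), os.SEEK_END)       # position just before the last bytes
          f.write(replacement)                         # overwrite the tail in place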

  • Filtering Data in a Text File with Python

    - by YAS
    I'm new to Python (like Zygote new), and this is just to supplement another program. What I need: I have a text file that holds a group of items for a game, formatted like so: [1] Name=Blah Faction=Blahdiddly Cost=1000 [2] Name=Meh Faction=MehMeh Cost=2000 [3] Name=Lollypop Faction=Blahdiddly Cost=100 I need to be able to find out which groups (the numbers in brackets) have matching values, so if I search for Faction=Blahdiddly, groups 1 & 3 will come up. I unfortunately have NO idea how to do this. Can anyone help?
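    Since the question is about Python anyway, here is a minimal sketch of one way to do it, assuming the file really is a sequence of [n] headers followed by key=value lines (the file name is an assumption):

      def load_groups(path):
          groups = {}
          current = None
          with open(path) as f:
              for line in f:
                  line = line.strip()
                  if line.startswith("[") and line.endswith("]"):
                      current = int(line[1:-1])        # group number from "[1]", "[2]", ...
                      groups[current] = {}
                  elif "=" in line and current is not None:
                      key, value = line.split("=", 1)
                      groups[current][key] = value
          return groups

      def find_matches(groups, key, value):
          return [number for number, fields in groups.items()
                  if fields.get(key) == value]

      groups = load_groups("items.txt")                      # assumed file name
      print(find_matches(groups, "Faction", "Blahdiddly"))   # -> [1, 3]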

  • Weird exception in linkbutton in a datalist

    - by user308806
    Dear all, I have written this datalist : <div class="story" runat="server"> <asp:DataList ID="DataList2" runat="server" Height="16px" Width="412px"> <SeparatorTemplate> <hr /> </SeparatorTemplate> <ItemTemplate> <asp:LinkButton ID="LinkButton1" runat ="server" Text='<%# Eval("Name") %>' PostBackUrl='<%#Eval("Url")%>' /> <br /> Description: <asp:Label ID="new" Text='<%#Eval("Description") %>' runat="server" /> </ItemTemplate> </asp:DataList> </div> It raises an exception saying that the linkbutton has to be placed in a tag that contains runat="server" although it exists. Here is the trace [HttpException (0x80004005): Le contrôle 'DataList2_ctl00_LinkButton1' de type 'LinkButton' doit être placé dans une balise form avec runat=server.] System.Web.UI.Page.VerifyRenderingInServerForm(Control control) +8689747 System.Web.UI.WebControls.LinkButton.AddAttributesToRender(HtmlTextWriter writer) +39 System.Web.UI.WebControls.WebControl.RenderBeginTag(HtmlTextWriter writer) +20 System.Web.UI.WebControls.WebControl.Render(HtmlTextWriter writer) +20 System.Web.UI.Control.RenderControlInternal(HtmlTextWriter writer, ControlAdapter adapter) +27 System.Web.UI.Control.RenderControl(HtmlTextWriter writer, ControlAdapter adapter) +99 System.Web.UI.Control.RenderControl(HtmlTextWriter writer) +25 System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) +134 System.Web.UI.Control.RenderChildren(HtmlTextWriter writer) +19 System.Web.UI.WebControls.WebControl.RenderContents(HtmlTextWriter writer) +10 System.Web.UI.WebControls.DataListItem.RenderItemInternal(HtmlTextWriter writer, Boolean extractRows, Boolean tableLayout) +51 System.Web.UI.WebControls.DataListItem.RenderItem(HtmlTextWriter writer, Boolean extractRows, Boolean tableLayout) +57 System.Web.UI.WebControls.DataList.System.Web.UI.WebControls.IRepeatInfoUser.RenderItem(ListItemType itemType, Int32 repeatIndex, RepeatInfo repeatInfo, HtmlTextWriter writer) +64 System.Web.UI.WebControls.RepeatInfo.RenderVerticalRepeater(HtmlTextWriter writer, IRepeatInfoUser user, Style controlStyle, WebControl baseControl) +262 System.Web.UI.WebControls.RepeatInfo.RenderRepeater(HtmlTextWriter writer, IRepeatInfoUser user, Style controlStyle, WebControl baseControl) +27 System.Web.UI.WebControls.DataList.RenderContents(HtmlTextWriter writer) +208 System.Web.UI.WebControls.BaseDataList.Render(HtmlTextWriter writer) +30 System.Web.UI.Control.RenderControlInternal(HtmlTextWriter writer, ControlAdapter adapter) +27 System.Web.UI.Control.RenderControl(HtmlTextWriter writer, ControlAdapter adapter) +99 System.Web.UI.Control.RenderControl(HtmlTextWriter writer) +25 System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) +134 System.Web.UI.Control.RenderChildren(HtmlTextWriter writer) +19 System.Web.UI.HtmlControls.HtmlContainerControl.Render(HtmlTextWriter writer) +32 System.Web.UI.Control.RenderControlInternal(HtmlTextWriter writer, ControlAdapter adapter) +27 System.Web.UI.Control.RenderControl(HtmlTextWriter writer, ControlAdapter adapter) +99 System.Web.UI.Control.RenderControl(HtmlTextWriter writer) +25 System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) +134 System.Web.UI.Control.RenderChildren(HtmlTextWriter writer) +19 System.Web.UI.Page.Render(HtmlTextWriter writer) +29 System.Web.UI.Control.RenderControlInternal(HtmlTextWriter writer, ControlAdapter adapter) +27 System.Web.UI.Control.RenderControl(HtmlTextWriter writer, ControlAdapter adapter) +99 
System.Web.UI.Control.RenderControl(HtmlTextWriter writer) +25 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +1266

  • Run bat file in Java and wait 2

    - by Savvas Dalkitsis
    This is a followup question to my other question : http://stackoverflow.com/questions/2434125/run-bat-file-in-java-and-wait The reason i am posting this as a separate question is that the one i already asked was answered correctly. From some research i did my problem is unique to my case so i decided to create a new question. Please go read that question before continuing with this one as they are closely related. Running the proposed code blocks the program at the waitFor invocation. After some research i found that the waitFor method blocks if your process has output that needs to be proccessed so you should first empty the output stream and the error stream. I did those things but my method still blocks. I then found a suggestion to simply loop while waiting the exitValue method to return the exit value of the process and handle the exception thrown if it is not, pausing for a brief moment as well so as not to consume all the CPU. I did this: import java.io.BufferedReader; import java.io.IOException; import java.io.InputStreamReader; public class Test { public static void main(String[] args) { try { Process p = Runtime.getRuntime().exec( "cmd /k start SQLScriptsToRun.bat" + " -UuserName -Ppassword" + " projectName"); final BufferedReader input = new BufferedReader(new InputStreamReader(p.getInputStream())); final BufferedReader error = new BufferedReader(new InputStreamReader(p.getErrorStream())); new Thread(new Runnable() { @Override public void run() { try { while (input.readLine()!=null) {} } catch (IOException e) { e.printStackTrace(); } } }).start(); new Thread(new Runnable() { @Override public void run() { try { while (error.readLine()!=null) {} } catch (IOException e) { e.printStackTrace(); } } }).start(); int i = 0; boolean finished = false; while (!finished) { try { i = p.exitValue(); finished = true; } catch (IllegalThreadStateException e) { e.printStackTrace(); try { Thread.sleep(500); } catch (InterruptedException e1) { e1.printStackTrace(); } } } System.out.println(i); } catch (IOException e) { e.printStackTrace(); } } } but my process will not end! I keep getting this error: java.lang.IllegalThreadStateException: process has not exited Any ideas as to why my process will not exit? Or do you have any libraries to suggest that handle executing batch files properly and wait until the execution is finished?
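    One thing worth noting when framing the problem: cmd /k asks the shell to keep running after the command finishes (cmd /c is the variant that exits), and start hands the batch file off to a separate window, so the Process object being waited on may never terminate on its own. The drain-the-streams-then-wait pattern itself, sketched in Python's subprocess purely as a language-neutral illustration (batch file name and arguments taken from the question):

      import subprocess

      # capture_output reads stdout and stderr concurrently, so the child cannot
      # block on a full pipe, and run() only returns once the process has exited.
      result = subprocess.run(
          ["cmd", "/c", "SQLScriptsToRun.bat", "-UuserName", "-Ppassword", "projectName"],
          capture_output=True,
          text=True,
      )
      print(result.returncode)
      print(result.stdout)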

  • Instance Failure in asp.net

    - by user85511
    I have a web application that is working perfectly in my system. However, when I copied it to another system, I couldn't login to the application. There is an error: Server Error in '/' Application. -------------------------------------------------------------------------------- Instance failure. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.InvalidOperationException: Instance failure. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [InvalidOperationException: Instance failure.] System.Data.SqlClient.TdsParser.Connect(ServerInfo serverInfo, SqlInternalConnectionTds connHandler, Boolean ignoreSniOpenTimeout, Int64 timerExpire, Boolean encrypt, Boolean trustServerCert, Boolean integratedSecurity, SqlConnection owningObject) +4858423 System.Data.SqlClient.SqlInternalConnectionTds.AttemptOneLogin(ServerInfo serverInfo, String newPassword, Boolean ignoreSniOpenTimeout, Int64 timerExpire, SqlConnection owningObject) +90 System.Data.SqlClient.SqlInternalConnectionTds.LoginNoFailover(String host, String newPassword, Boolean redirectedUserInstance, SqlConnection owningObject, SqlConnectionString connectionOptions, Int64 timerStart) +257 System.Data.SqlClient.SqlInternalConnectionTds.OpenLoginEnlist(SqlConnection owningObject, SqlConnectionString connectionOptions, String newPassword, Boolean redirectedUserInstance) +221 System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, Object providerInfo, String newPassword, SqlConnection owningObject, Boolean redirectedUserInstance) +189 System.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection) +4859187 System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection(DbConnection owningConnection, DbConnectionPool pool, DbConnectionOptions options) +31 System.Data.ProviderBase.DbConnectionPool.CreateObject(DbConnection owningObject) +433 System.Data.ProviderBase.DbConnectionPool.UserCreateRequest(DbConnection owningObject) +66 System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject) +499 System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection) +65 System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory) +117 System.Data.SqlClient.SqlConnection.Open() +122 System.Web.DataAccess.SqlConnectionHolder.Open(HttpContext context, Boolean revertImpersonate) +87 System.Web.DataAccess.SqlConnectionHelper.GetConnection(String connectionString, Boolean revertImpersonation) +221 System.Web.Security.SqlMembershipProvider.GetPasswordWithFormat(String username, Boolean updateLastLoginActivityDate, Int32& status, String& password, Int32& passwordFormat, String& passwordSalt, Int32& failedPasswordAttemptCount, Int32& failedPasswordAnswerAttemptCount, Boolean& isApproved, DateTime& lastLoginDate, DateTime& lastActivityDate) +815 System.Web.Security.SqlMembershipProvider.CheckPassword(String username, String password, Boolean updateLastLoginActivityDate, Boolean failIfNotApproved, String& salt, Int32& 
passwordFormat) +105 System.Web.Security.SqlMembershipProvider.CheckPassword(String username, String password, Boolean updateLastLoginActivityDate, Boolean failIfNotApproved) +42 System.Web.Security.SqlMembershipProvider.ValidateUser(String username, String password) +78 System.Web.UI.WebControls.Login.AuthenticateUsingMembershipProvider(AuthenticateEventArgs e) +60 System.Web.UI.WebControls.Login.OnAuthenticate(AuthenticateEventArgs e) +119 System.Web.UI.WebControls.Login.AttemptLogin() +115 System.Web.UI.WebControls.Login.OnBubbleEvent(Object source, EventArgs e) +101 System.Web.UI.Control.RaiseBubbleEvent(Object source, EventArgs args) +37 System.Web.UI.WebControls.Button.OnCommand(CommandEventArgs e) +118 System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) +166 System.Web.UI.WebControls.Button.System.Web.UI.IPostBackEventHandler.RaisePostBackEvent(String eventArgument) +10 System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) +13 System.Web.UI.Page.RaisePostBackEvent(NameValueCollection postData) +36 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +1565 -------------------------------------------------------------------------------- Version Information: Microsoft .NET Framework Version:2.0.50727.3053; ASP.NET Version:2.0.50727.3053 What could be the reason for such an error? How could I solve this?

  • Flow control in a batch file

    - by dboarman-FissureStudios
    Reference Iterating arrays in a batch file I have the following: for /f "tokens=1" %%Q in ('query termserver') do ( if not ERRORLEVEL ( echo Checking %%Q for /f "tokens=1" %%U in ('query user %UserID% /server:%%Q') do (echo %%Q) ) ) When running query termserver from the command line, the first two lines are: Known ------------------------- ...followed by the list of terminal servers. However, I do not want to include these as part of the query user command. Also, there are about 4 servers I do not wish to include. When I supply UserID with this code, the program is promptly exiting. I know it has something to do with the if statement. Is this not possible to nest flow control inside the for-loop? I had tried setting a variable to exactly the names of the servers I wanted to check, but the iteration would end on the first server: set TermServers=Server1.Server2.Server3.Server7.Server8.Server10 for /f "tokens=2 delims=.=" %%Q in ('set TermServers') do ( echo Checking %%Q for /f "tokens=1" %%U in ('query user %UserID% /server:%%Q') do (echo %%Q) ) I would prefer this second example over the first if nothing else for cleanliness. Any help regarding either of these issues would be greatly appreciated.

  • Versioning friendly, extendible binary file format

    - by Bas Bossink
    In the project I'm currently working on there is a need to save a sizable data structure to disk (edit: think dozens of MB's). Being an optimist, I thought that there must be a standard solution for such a problem; however, up to now I haven't found a solution that satisfies the following requirements: .NET 2.0 support, preferably with a FOSS implementation Version friendly (this should be interpreted as: reading an old version of the format should be relatively simple if the changes in the underlying data structure are simple, say adding/dropping fields) Ability to do some form of random access where part of the data can be extended after initial creation (think of this as extending intermediate results) Space and time efficient (XML has been excluded as option given this requirement) Options considered so far: Protocol Buffers: was turned down by verdict of the documentation about Large Data Sets - since this comment suggested adding another layer on top, this would call for additional complexity which I wish to have handled by the file format itself. HDF5,EXI: do not seem to have .net implementations SQLite/SQL Server Compact edition: the data structure at hand would result in a pretty complex table structure that seems too heavyweight for the intended use BSON: does not appear to support requirement 3. Fast Infoset: only seems to have paid .NET implementations. Any recommendations or pointers are greatly appreciated. Furthermore if you believe any of the information above is not true, please provide pointers/examples to prove me wrong.
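    For what "version friendly" can look like mechanically, one common trick is a chunked layout in which every record carries a version tag and a byte length, so an old reader can skip fields it does not understand and new data can be appended later. A rough sketch of such a layout using Python's struct module (the magic value, chunk names and version numbers are purely illustrative, not a recommendation of any particular library):

      import struct

      HEADER = struct.Struct("<4sHI")        # magic, format version, chunk count

      def write_chunk(f, tag, payload):
          # Each chunk: 4-byte tag, 2-byte chunk version, 4-byte payload length, payload.
          f.write(struct.pack("<4sHI", tag, 1, len(payload)))
          f.write(payload)

      with open("results.bin", "wb") as f:
          f.write(HEADER.pack(b"MYFM", 2, 2))
          write_chunk(f, b"META", b"some metadata")
          write_chunk(f, b"DATA", b"\x00" * 1024)

      # A reader built for an older format version can still walk the file: read a
      # tag and a length, consume what it recognises, and seek past anything else.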

  • How to save a picture to a file?

    - by Peter vdL
    I'm trying to use a standard Intent that will take a picture, then allow approval or retake. Then I want to save the picture into a file. Here's the Intent I am using: Intent intent = new Intent(android.provider.MediaStore.ACTION_IMAGE_CAPTURE ); startActivityForResult( intent, 22 ); The docs at http://developer.android.com/reference/android/provider/MediaStore.html#ACTION_IMAGE_CAPTURE say "The caller may pass an extra EXTRA_OUTPUT to control where this image will be written. If the EXTRA_OUTPUT is not present, then a small sized image is returned as a Bitmap object in the extra field. If the EXTRA_OUTPUT is present, then the full-sized image will be written to the Uri value of EXTRA_OUTPUT." I don't pass extra output, I hope to get a Bitmap object in the extra field of the Intent passed into onActivityResult() (for this request). So where/how do you extract it? Intent has a getExtras(), but that returns a Bundle, and Bundle wants a key string to give you something back. What do you invoke on the Intent to extract the bitmap?

  • aio_read from file error on OS X

    - by Pyetras
    The following code: #include <fcntl.h> #include <unistd.h> #include <stdio.h> #include <aio.h> #include <errno.h> int main (int argc, char const *argv[]) { char name[] = "abc"; int fdes; if ((fdes = open(name, O_RDWR | O_CREAT, 0600 )) < 0) printf("%d, create file", errno); int buffer[] = {0, 1, 2, 3, 4, 5}; if (write(fdes, &buffer, sizeof(buffer)) == 0){ printf("writerr\n"); } struct aiocb aio; int n = 2; while (n--){ aio.aio_reqprio = 0; aio.aio_fildes = fdes; aio.aio_offset = sizeof(int); aio.aio_sigevent.sigev_notify = SIGEV_NONE; int buffer2; aio.aio_buf = &buffer2; aio.aio_nbytes = sizeof(buffer2); if (aio_read(&aio) != 0){ printf("%d, readerr\n", errno); }else{ const struct aiocb *aio_l[] = {&aio}; if (aio_suspend(aio_l, 1, 0) != 0){ printf("%d, suspenderr\n", errno); }else{ printf("%d\n", *(int *)aio.aio_buf); } } } return 0; } Works fine on Linux (Ubuntu 9.10, compiled with -lrt), printing 1 1 But fails on OS X (10.6.6 and 10.6.5, I've tested it on two machines): 1 35, readerr Is this possible that this is due to some library error on OS X, or am I doing something wrong?

  • [UNIX] Sort lines of massive file by number of words on line (ideally in parallel)

    - by conradlee
    I am working on a community detection algorithm for analyzing social network data from Facebook. The first task, detecting all cliques in the graph, can be done efficiently in parallel, and leaves me with an output like this:

      17118 17136 17392
      17064 17093 17376
      17118 17136 17356 17318 12345
      17118 17136 17356 17283
      17007 17059 17116

    Each of these lines represents a unique clique (a collection of node ids), and I want to sort these lines in descending order by the number of ids per line. For the example above, here's what the output should look like:

      17118 17136 17356 17318 12345
      17118 17136 17356 17283
      17118 17136 17392
      17064 17093 17376
      17007 17059 17116

    (Ties---i.e., lines with the same number of ids---can be sorted arbitrarily.) What is the most efficient way of sorting these lines? Keep the following points in mind: the file I want to sort could be larger than the physical memory of the machine; most of the machines I'm running this on have several processors, so a parallel solution would be ideal; an ideal solution would just be a shell script (probably using sort), but I'm open to simple solutions in Python or Perl (or any language, as long as it makes the task simple). This task is in some sense very easy---I'm not just looking for any old solution, but rather for a simple and above all efficient one.
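    The classic decorate-sort-undecorate approach fits here: prefix every line with its word count, sort numerically in reverse, then strip the count again (in shell, something along the lines of awk '{print NF, $0}' piped through sort -k1,1nr and cut). For files that do fit in memory, a Python sketch of the same idea (the file names are assumptions):

      # Simple in-memory version; for files larger than RAM you would fall back to
      # GNU sort (which does external merge sorting) or chunk the file yourself.
      with open("cliques.txt") as f:
          lines = f.readlines()

      lines.sort(key=lambda line: len(line.split()), reverse=True)

      with open("cliques_sorted.txt", "w") as out:
          out.writelines(lines)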

  • What arguments to use to explain why SQL Server is far better than a flat file

    - by jamone
    The higher-ups in my company were told by good friends that flat files are the way to go, and that we should switch from SQL Server to them for everything we do. We have over 300 servers and hundreds of different databases. From just the few I'm involved with, we have 10 billion records in quite a few of them, with upwards of 100k new records a day and who knows how many updates... A couple of others and I need to come up with a response saying why we shouldn't do this. Most of our stuff is ASP.NET with some legacy ASP. We thought about making a simple console app that tests/times the same interactions against a flat file (stored on the network) and against SQL over the network: large inserts, searches, updates, etc., along with things like random network disconnects. This would show them how bad flat files can be, especially when you are dealing with millions of records. What things should I use in my response? What should I do with my demo code to illustrate this? My short list so far: security; concurrent access; performance with large amounts of data; the amount of time to do such a massive rewrite/switch; lack of transactions; PITA to map relational data to flat files; NTFS doesn't handle tons of files in a directory well. I fear that this will be a great post on The Daily WTF someday if I can't stop it now.
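    A bare-bones version of the demo described above, in Python rather than a .NET console app purely to sketch its shape (row counts, file names and the schema are all assumptions): it times bulk inserts and repeated lookups against a flat text file and against an indexed SQLite table, the same kind of comparison that would be even more lopsided against a real SQL Server instance over the network.

      import csv, random, sqlite3, time

      ROWS, LOOKUPS = 100_000, 200
      rows = [(i, "customer_%d" % i, random.random()) for i in range(ROWS)]
      wanted = random.sample(range(ROWS), LOOKUPS)

      # --- flat file: append everything, then scan the whole file per lookup ----
      start = time.perf_counter()
      with open("demo.csv", "w", newline="") as f:
          csv.writer(f).writerows(rows)
      flat_insert = time.perf_counter() - start

      start = time.perf_counter()
      for key in wanted:
          with open("demo.csv", newline="") as f:
              next(r for r in csv.reader(f) if int(r[0]) == key)
      flat_lookup = time.perf_counter() - start

      # --- SQLite: bulk insert inside a transaction, indexed primary-key lookups -
      db = sqlite3.connect("demo.db")
      db.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER PRIMARY KEY, name TEXT, score REAL)")
      start = time.perf_counter()
      with db:
          db.executemany("INSERT OR REPLACE INTO t VALUES (?, ?, ?)", rows)
      sql_insert = time.perf_counter() - start

      start = time.perf_counter()
      for key in wanted:
          db.execute("SELECT * FROM t WHERE id = ?", (key,)).fetchone()
      sql_lookup = time.perf_counter() - start

      print("flat file: insert %.2fs, %d lookups %.2fs" % (flat_insert, LOOKUPS, flat_lookup))
      print("sqlite:    insert %.2fs, %d lookups %.2fs" % (sql_insert, LOOKUPS, sql_lookup))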

  • Batch File to Delete Folders

    - by Homebrew
    I found some code to delete folders, in this case deleting all but 'n' # of folders. I created 10 test folders, plus 1 that was already there. I want to delete all but 4. The code works, it leaves 4 of my test folders, except that it also leaves the other folder. Is there some attribute of the other folder that's getting checked in the batch file that's stopping it from getting deleted ? It was created through a job a couple of weeks ago. Here's the code I stole (but don't really understand the details): rem DOS - Delete Folders if # folders > n @Echo Off :: User Variables :: Set this to the number of folders you want to keep Set _NumtoKeep=4 :: Set this to the folder that contains the folders to check and delete Set _Path=C:\MyFolder_Temp\FolderTest If Exist "%temp%\tf}1{" Del "%temp%\tf}1{" PushD %_Path% Set _s=%_NumtoKeep% If %_NumtoKeep%==1 set _s=single For /F "tokens=* skip=%_NumtoKeep%" %%I In ('dir "%_Path%" /AD /B /O-D /TW') Do ( If Exist "%temp%\tf}1{" ( Echo %%I:%%~fI >>"%temp%\tf}1{" ) Else ( Echo.>"%temp%\tf}1{" Echo Do you wish to delete the following folders?>>"%temp%\tf}1{" Echo Date Name>>"%temp%\tf}1{" Echo %%I:%%~fI >>"%temp%\tf}1{" )) PopD If Not Exist "%temp%\tf}1{" Echo No Folders Found to delete & Goto _Done Type "%temp%\tf}1{" | More Set _rdflag= /q Goto _Removeold Set _rdflag= :_Removeold For /F "tokens=1* skip=3 Delims=:" %%I In ('type "%temp%\tf}1{"') Do ( If "%_rdflag%"=="" Echo Deleting rd /s%_rdflag% "%%J") :_Done If Exist "%temp%\tf}1{" Del "%temp%\tf}1{"
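    The keep-the-newest-n logic is easier to follow outside batch, so here is a sketch in Python (path and count taken from the script above). It may also help with the mystery folder: one possibility worth checking is that the script's dir /O-D /TW listing orders folders by last-write time, so a folder whose contents were touched recently stays in the kept group even if it was created weeks ago.

      import os, shutil

      KEEP = 4
      base = r"C:\MyFolder_Temp\FolderTest"

      folders = [os.path.join(base, name) for name in os.listdir(base)
                 if os.path.isdir(os.path.join(base, name))]

      # Newest first by last-modified time, mirroring dir /O-D /TW.
      folders.sort(key=os.path.getmtime, reverse=True)

      for path in folders[KEEP:]:               # everything past the newest KEEP
          print("Deleting", path)
          shutil.rmtree(path)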

  • Question about using an Access database as a resource file in Visual Studio.

    - by user354303
    Hi I am trying to embed a Microsoft Access database file into my Class assembly DLL. I want my code to reference the resource file and use it with a ADODB.Connection object. Any body know a simpler way, or an easier way? Or what is wrong with my code, when i added the resource file it added me dataset definitions, but i have no idea what to do with those. The connection string I am trying below is from an automatically generated app.config. I did add the item as a resource... using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Data; using ConsoleApplication1.Resources;//SPPrinterLicenses using System.Data.OleDb; using ADODB; using System.Configuration; namespace ConsoleApplication1 { class SharePointPrinterManager { public static bool IsValidLicense(string HardwareID) { OleDbDataAdapter da = new OleDbDataAdapter(); DataSet ds = new DataSet(); ADODB.Connection adoCn = new Connection(); ADODB.Recordset adoRs = new Recordset(); //**open command below fails** adoCn.Open( @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=|DataDirectory|\Resources\SPPrinterLicenses.accdb;Persist Security Info=True", "", "", 1); adoRs.Open("Select * from AllWorkstationLicenses", adoCn, ADODB.CursorTypeEnum.adOpenForwardOnly, ADODB.LockTypeEnum.adLockReadOnly, 1); da.Fill(ds, adoRs, "AllworkstationLicenses"); adoCn.Close(); DataTable dt = new DataTable(); //ds.Tables. return true; } } }

  • NullReferenceException when reading from a file

    - by Whitey
    I need to read a file structured like this:

      01000
      00030
      00500
      03000
      00020

    and put it in an array like this:

      int[,] iMap = new int[iMapHeight, iMapWidth]
      {
          {0, 1, 0, 0, 0},
          {0, 0, 0, 3, 0},
          {0, 0, 5, 0, 0},
          {0, 3, 0, 0, 0},
          {0, 0, 0, 2, 0},
      };

    Hopefully you see what I'm trying to do here. I was confused about how to do this so I asked here on SO, but the code I got from it gets this error: Object reference not set to an instance of an object. I'm pretty new to this, so I have no idea how to fix it... I only barely know the code:

      protected void ReadMap(string mapPath)
      {
          using (var reader = new StreamReader(mapPath))
          {
              for (int i = 0; i < iMapHeight; i++)
              {
                  string line = reader.ReadLine();
                  for (int j = 0; j < iMapWidth; j++)
                  {
                      iMap[i, j] = (int)(line[j] - '0');
                  }
              }
          }
      }

    The line I get the error on is this: iMap[i, j] = (int)(line[j] - '0'); Can anyone provide a solution? Thank you. :)

  • C++ File manipulation problem

    - by Carlucho
    I am trying to open a file which normally has content. For testing purposes I would like to initialize the program without the files being available/existing, in which case the program should create empty ones, but I am having issues implementing it. This is my original code:

      void loadFiles() {
          fstream city;
          city.open("city.txt", ios::in);
          fstream latitude;
          latitude.open("lat.txt", ios::in);
          fstream longitude;
          longitude.open("lon.txt", ios::in);
          while(!city.eof()){
              city >> cityName;
              latitude >> lat;
              longitude >> lon;
              t.add(cityName, lat, lon);
          }
          city.close();
          latitude.close();
          longitude.close();
      }

    I have tried everything I can think of: ofstream, ifstream, adding ios::out and all its variations. Could anybody explain what to do in order to fix the problem? Thanks!

  • Loading files through one file to hide locations

    - by Phil Jackson
    Hello all. I'm currently doing a project in which my client does not want the locations (i.e. folder names and paths) to be displayed, so I have done something like this:

      <link href="./?0000=css&0001=0001&0002=css" rel="stylesheet" type="text/css" />
      <link href="./?0000=css&0001=0002&0002=css" rel="stylesheet" type="text/css" />
      <script src="./?0000=js&0001=0000&0002=script" type="text/javascript"></script>
      </head>
      <body>
      <div id="wrapper">
        <div id="header">
          <div id="left_header">
            <img src="./?0000=jpg&0001=0001&0002=pic" width="277" height="167" alt="" />
          </div>
          <div id="right_header">
            <div id="top-banner"></div>
            <ul id="navigation">
              <li><a href="#" title="#" id="nav-home">Home</a></li>
              <li><a href="#" title="#">Signup</a></li>

    It all works, but my question is: will this cause any complications, for example the speed of the site, given that every request is made to one single file which then loads in the appropriate data? Regards, Phil

  • SQL query error while trying to put a file in the database

    - by DaGhostman Dimitrov
    Hey Guys I have a big problem that I have no Idea why.. I have few forms that upload files to the database, all of them work fine except one.. I use the same query in all(simple insert). I think that it has something to do with the files i am trying to upload but I am not sure. Here is the code: if ($_POST['action'] == 'hndlDocs') { $ref = $_POST['reference']; // Numeric value of $doc = file_get_contents($_FILES['doc']['tmp_name']); $link = mysqli_connect('localhost','XXXXX','XXXXX','documents'); mysqli_query($link,"SET autocommit = 0"); $query = "INSERT INTO documents ({$ref}, '{$doc}', '{$_FILES['doc']['type']}') ;"; mysqli_query($link,$query); if (mysqli_error($link)) { var_dump(mysqli_error($link)); mysqli_rollback($link); } else { print("<script> window.history.back(); </script>"); mysqli_commit($link); } } The database has only these fields: DATABASE documents ( reference INT(5) NOT NULL, //it is unsigned zerofill doc LONGBLOB NOT NULL, //this should contain the binary data mime_type TEXT NOT NULL // the mime type of the file php allows only application/pdf and image/jpeg ); And the error I get is : You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '00001, '????' at line 1 I will appreciate every help. Cheers!
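    As posted, the INSERT statement has no column list and no VALUES keyword, and the raw file bytes are interpolated straight into the SQL string, which is what the syntax error is pointing at. The general fix pattern is a parameterized insert; sketched here with Python's sqlite3 simply because it is compact (mysqli supports the same idea through prepared statements with bound parameters):

      import sqlite3

      connection = sqlite3.connect("documents.db")        # stand-in for the MySQL database
      connection.execute(
          "CREATE TABLE IF NOT EXISTS documents (reference INTEGER, doc BLOB, mime_type TEXT)"
      )

      with open("upload.pdf", "rb") as f:                 # assumed file name
          payload = f.read()

      with connection:
          # Explicit column list, VALUES keyword, and placeholders instead of
          # string interpolation, so binary content cannot break the statement.
          connection.execute(
              "INSERT INTO documents (reference, doc, mime_type) VALUES (?, ?, ?)",
              (1, sqlite3.Binary(payload), "application/pdf"),
          )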

  • reading the file name from user input in MIPS assembly

    - by Hassan Al-Jeshi
    I'm writing a MIPS assembly code that will ask the user for the file name and it will produce some statistics about the content of the file. However, when I hard code the file name into a variable from the beginning it works just fine, but when I ask the user to input the file name it does not work. after some debugging, I have discovered that the program adds 0x00 char and 0x0a char (check asciitable.com) at the end of user input in the memory and that's why it does not open the file based on the user input. anyone has any idea about how to get rid of those extra chars, or how to open the file after getting its name from the user?? here is my complete code (it is working fine except for the file name from user thing, and anybody is free to use it for any purpose he/she wants to): .data fin: .ascii "" # filename for input msg0: .asciiz "aaaa" msg1: .asciiz "Please enter the input file name:" msg2: .asciiz "Number of Uppercase Char: " msg3: .asciiz "Number of Lowercase Char: " msg4: .asciiz "Number of Decimal Char: " msg5: .asciiz "Number of Words: " nline: .asciiz "\n" buffer: .asciiz "" .text #----------------------- li $v0, 4 la $a0, msg1 syscall li $v0, 8 la $a0, fin li $a1, 21 syscall jal fileRead #read from file move $s1, $v0 #$t0 = total number of bytes li $t0, 0 # Loop counter li $t1, 0 # Uppercase counter li $t2, 0 # Lowercase counter li $t3, 0 # Decimal counter li $t4, 0 # Words counter loop: bge $t0, $s1, end #if end of file reached OR if there is an error in the file lb $t5, buffer($t0) #load next byte from file jal checkUpper #check for upper case jal checkLower #check for lower case jal checkDecimal #check for decimal jal checkWord #check for words addi $t0, $t0, 1 #increment loop counter j loop end: jal output jal fileClose li $v0, 10 syscall fileRead: # Open file for reading li $v0, 13 # system call for open file la $a0, fin # input file name li $a1, 0 # flag for reading li $a2, 0 # mode is ignored syscall # open a file move $s0, $v0 # save the file descriptor # reading from file just opened li $v0, 14 # system call for reading from file move $a0, $s0 # file descriptor la $a1, buffer # address of buffer from which to read li $a2, 100000 # hardcoded buffer length syscall # read from file jr $ra output: li $v0, 4 la $a0, msg2 syscall li $v0, 1 move $a0, $t1 syscall li $v0, 4 la $a0, nline syscall li $v0, 4 la $a0, msg3 syscall li $v0, 1 move $a0, $t2 syscall li $v0, 4 la $a0, nline syscall li $v0, 4 la $a0, msg4 syscall li $v0, 1 move $a0, $t3 syscall li $v0, 4 la $a0, nline syscall li $v0, 4 la $a0, msg5 syscall addi $t4, $t4, 1 li $v0, 1 move $a0, $t4 syscall jr $ra checkUpper: blt $t5, 0x41, L1 #branch if less than 'A' bgt $t5, 0x5a, L1 #branch if greater than 'Z' addi $t1, $t1, 1 #increment Uppercase counter L1: jr $ra checkLower: blt $t5, 0x61, L2 #branch if less than 'a' bgt $t5, 0x7a, L2 #branch if greater than 'z' addi $t2, $t2, 1 #increment Lowercase counter L2: jr $ra checkDecimal: blt $t5, 0x30, L3 #branch if less than '0' bgt $t5, 0x39, L3 #branch if greater than '9' addi $t3, $t3, 1 #increment Decimal counter L3: jr $ra checkWord: bne $t5, 0x20, L4 #branch if 'space' addi $t4, $t4, 1 #increment words counter L4: jr $ra fileClose: # Close the file li $v0, 16 # system call for close file move $a0, $s0 # file descriptor to close syscall # close file jr $ra Note: I'm using MARS Simulator, if that makes any different

  • C#: System.Collections.Concurrent.ConcurrentQueue vs. Queue

    - by James Michael Hare
    I love new toys, so of course when .NET 4.0 came out I felt like the proverbial kid in the candy store!  Now, some people get all excited about the IDE and it’s new features or about changes to WPF and Silver Light and yes, those are all very fine and grand.  But me, I get all excited about things that tend to affect my life on the backside of development.  That’s why when I heard there were going to be concurrent container implementations in the latest version of .NET I was salivating like Pavlov’s dog at the dinner bell. They seem so simple, really, that one could easily overlook them.  Essentially they are implementations of containers (many that mirror the generic collections, others are new) that have either been optimized with very efficient, limited, or no locking but are still completely thread safe -- and I just had to see what kind of an improvement that would translate into. Since part of my job as a solutions architect here where I work is to help design, develop, and maintain the systems that process tons of requests each second, the thought of extremely efficient thread-safe containers was extremely appealing.  Of course, they also rolled out a whole parallel development framework which I won’t get into in this post but will cover bits and pieces of as time goes by. This time, I was mainly curious as to how well these new concurrent containers would perform compared to areas in our code where we manually synchronize them using lock or some other mechanism.  So I set about to run a processing test with a series of producers and consumers that would be either processing a traditional System.Collections.Generic.Queue or a System.Collection.Concurrent.ConcurrentQueue. Now, I wanted to keep the code as common as possible to make sure that the only variance was the container, so I created a test Producer and a test Consumer.  The test Producer takes an Action<string> delegate which is responsible for taking a string and placing it on whichever queue we’re testing in a thread-safe manner: 1: internal class Producer 2: { 3: public int Iterations { get; set; } 4: public Action<string> ProduceDelegate { get; set; } 5: 6: public void Produce() 7: { 8: for (int i = 0; i < Iterations; i++) 9: { 10: ProduceDelegate(“Hello”); 11: } 12: } 13: } Then likewise, I created a consumer that took a Func<string> that would read from whichever queue we’re testing and return either the string if data exists or null if not.  Then, if the item doesn’t exist, it will do a 10 ms wait before testing again.  Once all the producers are done and join the main thread, a flag will be set in each of the consumers to tell them once the queue is empty they can shut down since no other data is coming: 1: internal class Consumer 2: { 3: public Func<string> ConsumeDelegate { get; set; } 4: public bool HaltWhenEmpty { get; set; } 5: 6: public void Consume() 7: { 8: bool processing = true; 9: 10: while (processing) 11: { 12: string result = ConsumeDelegate(); 13: 14: if(result == null) 15: { 16: if (HaltWhenEmpty) 17: { 18: processing = false; 19: } 20: else 21: { 22: Thread.Sleep(TimeSpan.FromMilliseconds(10)); 23: } 24: } 25: else 26: { 27: DoWork(); // do something non-trivial so consumers lag behind a bit 28: } 29: } 30: } 31: } Okay, now that we’ve done that, we can launch threads of varying numbers using lambdas for each different method of production/consumption.  
First let's look at the lambdas for a typical System.Collections.Generics.Queue with locking: 1: // lambda for putting to typical Queue with locking... 2: var productionDelegate = s => 3: { 4: lock (_mutex) 5: { 6: _mutexQueue.Enqueue(s); 7: } 8: }; 9:  10: // and lambda for typical getting from Queue with locking... 11: var consumptionDelegate = () => 12: { 13: lock (_mutex) 14: { 15: if (_mutexQueue.Count > 0) 16: { 17: return _mutexQueue.Dequeue(); 18: } 19: } 20: return null; 21: }; Nothing new or interesting here.  Just typical locks on an internal object instance.  Now let's look at using a ConcurrentQueue from the System.Collections.Concurrent library: 1: // lambda for putting to a ConcurrentQueue, notice it needs no locking! 2: var productionDelegate = s => 3: { 4: _concurrentQueue.Enqueue(s); 5: }; 6:  7: // lambda for getting from a ConcurrentQueue, once again, no locking required. 8: var consumptionDelegate = () => 9: { 10: string s; 11: return _concurrentQueue.TryDequeue(out s) ? s : null; 12: }; So I pass each of these lambdas and the number of producer and consumers threads to launch and take a look at the timing results.  Basically I’m timing from the time all threads start and begin producing/consuming to the time that all threads rejoin.  I won't bore you with the test code, basically it just launches code that creates the producers and consumers and launches them in their own threads, then waits for them all to rejoin.  The following are the timings from the start of all threads to the Join() on all threads completing.  The producers create 10,000,000 items evenly between themselves and then when all producers are done they trigger the consumers to stop once the queue is empty. These are the results in milliseconds from the ordinary Queue with locking: 1: Consumers Producers 1 2 3 Time (ms) 2: ---------- ---------- ------ ------ ------ --------- 3: 1 1 4284 5153 4226 4554.33 4: 10 10 4044 3831 5010 4295.00 5: 100 100 5497 5378 5612 5495.67 6: 1000 1000 24234 25409 27160 25601.00 And the following are the results in milliseconds from the ConcurrentQueue with no locking necessary: 1: Consumers Producers 1 2 3 Time (ms) 2: ---------- ---------- ------ ------ ------ --------- 3: 1 1 3647 3643 3718 3669.33 4: 10 10 2311 2136 2142 2196.33 5: 100 100 2480 2416 2190 2362.00 6: 1000 1000 7289 6897 7061 7082.33 Note that even though obviously 2000 threads is quite extreme, the concurrent queue actually scales really well, whereas the traditional queue with simple locking scales much more poorly. I love the new concurrent collections, they look so much simpler without littering your code with the locking logic, and they perform much better.  All in all, a great new toy to add to your arsenal of multi-threaded processing!
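    For readers outside .NET, the producer/consumer shape being benchmarked here maps onto any thread-safe queue; a compressed sketch of it using Python's queue.Queue (which does its own internal locking, so the calling code needs none), with the iteration counts scaled down as an assumption:

      import queue, threading

      work = queue.Queue()              # thread-safe: put/get need no explicit lock
      ITEMS, CONSUMERS = 100_000, 4

      def producer():
          for _ in range(ITEMS):
              work.put("Hello")

      def consumer():
          while True:
              item = work.get()
              if item is None:          # sentinel tells this consumer to stop
                  break
              # ... do some non-trivial work with item ...

      threads = [threading.Thread(target=producer)]
      threads += [threading.Thread(target=consumer) for _ in range(CONSUMERS)]
      for t in threads:
          t.start()
      threads[0].join()                 # wait for the producer to finish
      for _ in range(CONSUMERS):
          work.put(None)                # one sentinel per consumer
      for t in threads[1:]:
          t.join()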

  • Unable to use factory girl with Cucumber and rails 3 (bundler problem)

    - by jbpros
    Hi there, I'm trying to run cucumber features with factory girl factories on a fresh Rails 3 application. Here is my Gemfile: source "http://gemcutter.org" gem "rails", "3.0.0.beta" gem "pg" gem "factory_girl", :git => "git://github.com/thoughtbot/factory_girl.git", :branch => "rails3" gem "rspec-rails", ">= 2.0.0.beta.4" gem "capybara" gem "database_cleaner" gem "cucumber-rails", :require => false Then the bundle install commande just runs smoothly: $ bundle install /usr/lib/ruby/gems/1.8/gems/bundler-0.9.3/lib/bundler/installer.rb:81:Warning: Gem::Dependency#version_requirements is deprecated and will be removed on or after August 2010. Use #requirement Updating git://github.com/thoughtbot/factory_girl.git Fetching source index from http://gemcutter.org Resolving dependencies Installing abstract (1.0.0) from system gems Installing actionmailer (3.0.0.beta) from system gems Installing actionpack (3.0.0.beta) from system gems Installing activemodel (3.0.0.beta) from system gems Installing activerecord (3.0.0.beta) from system gems Installing activeresource (3.0.0.beta) from system gems Installing activesupport (3.0.0.beta) from system gems Installing arel (0.2.1) from system gems Installing builder (2.1.2) from system gems Installing bundler (0.9.13) from system gems Installing capybara (0.3.6) from system gems Installing cucumber (0.6.3) from system gems Installing cucumber-rails (0.3.0) from system gems Installing culerity (0.2.9) from system gems Installing database_cleaner (0.5.0) from system gems Installing diff-lcs (1.1.2) from system gems Installing erubis (2.6.5) from system gems Installing factory_girl (1.2.3) from git://github.com/thoughtbot/factory_girl.git (at rails3) Installing ffi (0.6.3) from system gems Installing i18n (0.3.6) from system gems Installing json_pure (1.2.3) from system gems Installing mail (2.1.3) from system gems Installing memcache-client (1.7.8) from system gems Installing mime-types (1.16) from system gems Installing nokogiri (1.4.1) from system gems Installing pg (0.9.0) from system gems Installing polyglot (0.3.0) from system gems Installing rack (1.1.0) from system gems Installing rack-mount (0.4.7) from system gems Installing rack-test (0.5.3) from system gems Installing rails (3.0.0.beta) from system gems Installing railties (3.0.0.beta) from system gems Installing rake (0.8.7) from system gems Installing rspec (2.0.0.beta.4) from system gems Installing rspec-core (2.0.0.beta.4) from system gems Installing rspec-expectations (2.0.0.beta.4) from system gems Installing rspec-mocks (2.0.0.beta.4) from system gems Installing rspec-rails (2.0.0.beta.4) from system gems Installing selenium-webdriver (0.0.17) from system gems Installing term-ansicolor (1.0.5) from system gems Installing text-format (1.0.0) from system gems Installing text-hyphen (1.0.0) from system gems Installing thor (0.13.4) from system gems Installing treetop (1.4.4) from system gems Installing tzinfo (0.3.17) from system gems Installing webrat (0.7.0) from system gems Your bundle is complete! When I run cucumber, here is the error I get: $ rake cucumber (in /home/jbpros/projects/deorbitburn) /usr/lib/ruby/gems/1.8/gems/bundler-0.9.3/lib/bundler/resolver.rb:97:Warning: Gem::Dependency#version_requirements is deprecated and will be removed on or after August 2010. 
Use #requirement NOTICE: CREATE TABLE will create implicit sequence "posts_id_seq" for serial column "posts.id" NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "posts_pkey" for table "posts" /usr/bin/ruby1.8 -I "/usr/lib/ruby/gems/1.8/gems/cucumber-0.6.3/lib:lib" "/usr/lib/ruby/gems/1.8/gems/cucumber-0.6.3/bin/cucumber" --profile default Using the default profile... git://github.com/thoughtbot/factory_girl.git (at rails3) is not checked out. Please run `bundle install` (Bundler::PathError) /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/source.rb:282:in `load_spec_files' /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/source.rb:190:in `local_specs' /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/environment.rb:36:in `runtime_gems' /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/environment.rb:35:in `each' /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/environment.rb:35:in `runtime_gems' /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/index.rb:5:in `build' /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/environment.rb:34:in `runtime_gems' /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/environment.rb:14:in `index' /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/index.rb:5:in `build' /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/environment.rb:13:in `index' /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/environment.rb:55:in `resolve_locally' /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/environment.rb:28:in `specs' /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/environment.rb:65:in `specs_for' /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/environment.rb:23:in `requested_specs' /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/runtime.rb:18:in `setup' /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler.rb:68:in `setup' /home/jbpros/projects/deorbitburn/config/boot.rb:7 /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require' /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `polyglot_original_require' /usr/lib/ruby/gems/1.8/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require' /home/jbpros/projects/deorbitburn/config/application.rb:1 /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require' /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `polyglot_original_require' /usr/lib/ruby/gems/1.8/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require' /home/jbpros/projects/deorbitburn/config/environment.rb:2 /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require' /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `polyglot_original_require' /usr/lib/ruby/gems/1.8/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require' /home/jbpros/projects/deorbitburn/features/support/env.rb:8 /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require' /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `polyglot_original_require' /usr/lib/ruby/gems/1.8/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require' /usr/lib/ruby/gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/rb_support/rb_language.rb:124:in `load_code_file' /usr/lib/ruby/gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/step_mother.rb:85:in `load_code_file' /usr/lib/ruby/gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/step_mother.rb:77:in `load_code_files' /usr/lib/ruby/gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/step_mother.rb:76:in `each' 
/usr/lib/ruby/gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/step_mother.rb:76:in `load_code_files' /usr/lib/ruby/gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/cli/main.rb:48:in `execute!' /usr/lib/ruby/gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/cli/main.rb:20:in `execute' /usr/lib/ruby/gems/1.8/gems/cucumber-0.6.3/bin/cucumber:8 rake aborted! Command failed with status (1): [/usr/bin/ruby1.8 -I "/usr/lib/ruby/gems/1....] (See full trace by running task with --trace) Do I have to do something special for bundler to check out factory girl's repository on github?

  • Unable to mount external hard drive - Damaged file system and MFT

    - by Khalifa Abbas Lame
    I get the following error when i try to mount my external hard drive. UNABLE TO MOUNT Error mounting /dev/sdc1 at /media/khalibloo/Khalibloo2: Command-line `mount -t "ntfs" -o "uhelper=udisks2,nodev,nosuid,uid=1000,gid=1000,dmask=0077,fmask=0177" "/dev/sdc1" "/media/khalibloo/Khalibloo2"' exited with non-zero exit status 13: ntfs_attr_pread_i: ntfs_pread failed: Input/output error Failed to read of MFT, mft=6 count=1 br=-1: Input/output error Failed to open inode FILE_Bitmap: Input/output error Failed to mount '/dev/sdc1': Input/output error NTFS is either inconsistent, or there is a hardware fault, or it's a SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows then reboot into Windows twice. The usage of the /f parameter is very important! If the device is a SoftRAID/FakeRAID then first activate it and mount a different device under the /dev/mapper/ directory, (e.g. /dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid' documentation for more details. It doesn't mount on windows either: "I/O Device error" it's an ntfs hard drive with a single partition Of course, i tried chkdsk /f. it reported several file segments as unreadable, but didn't say whether it fixed them or not (apparently not). also tried with the /b flag. ntfsfix reported the volume as corrupt. TestDisk was able to fix a small error with the partition table by adding the "80" flag for the active (only) partition. TestDisk also confirmed that the boot sector was fine and it matched the backup. However, when attempting to repair the MFT, it couldn't read the MFT. It also couldn't list the files on the hard drive. It says file system may be damaged. Active@ also shows that MFT is missing or corrupt. So how do i fix the file system? or the MFT?

  • Add entries to Nautilus' right-click menu (copy, move to arbitrary directories)

    - by qbi
    Assume I want to copy a file from /home/foo/bar/baz to /opt/quuz/dir1/option3. When I try it with Nautilus, first I have to open the correct directory, copy the file, go to the other directory and paste it there. I was thinking of a better way and old KDE3 versions of Konqueror came to mind. It was possible to right-click on a file. The context menu had an option for copying, moving the file to some default directories. Furthermore you could select any directory under /. So for the above action one would right click on a file, select /opt first, a list of subdirectories will open, select /opt/quuz and so on. Using GNOME there are only two possible values (home and desktop). Is there any way to insert more directories to this context menu in GNOME? Can I copy somehow the behaviour of Konqueror?
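    One way GNOME approximates Konqueror's behaviour is a Nautilus script: an executable dropped into the Nautilus scripts directory shows up under a Scripts entry in the right-click menu and receives the selected files through the NAUTILUS_SCRIPT_SELECTED_FILE_PATHS environment variable. A sketch, assuming zenity is installed and that the scripts directory for this Nautilus version is ~/.gnome2/nautilus-scripts (newer releases use ~/.local/share/nautilus/scripts):

      #!/usr/bin/env python
      # Save as e.g. ~/.gnome2/nautilus-scripts/Copy to..., then make it executable.
      import os, shutil, subprocess

      selected = os.environ.get("NAUTILUS_SCRIPT_SELECTED_FILE_PATHS", "")
      paths = [p for p in selected.splitlines() if p]

      # Ask for the destination with a zenity directory chooser.
      destination = subprocess.check_output(
          ["zenity", "--file-selection", "--directory"], text=True
      ).strip()

      for path in paths:
          shutil.copy2(path, destination)      # use shutil.move() for a "Move to..." script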
