Search Results

Search found 16489 results on 660 pages for 'personal folder'.


  • How to set the image source path in HTML pages shown in a WebBrowser control

    - by Royson
    Hi, in my application there is a WebBrowser control to show some static HTML pages. The pages are displayed properly, but the images are not. I tried changing the src path, but with no success. My HtmlPages folder is located in the bin folder, and I am loading the page as: FileStream source = new FileStream(@"..\HtmlPages\supportHtml.html", FileMode.Open, FileAccess.Read); If I open the HTML files directly in a browser, the images are displayed properly. So, what is the correct path for the images? If I set the full path in the src attribute of the <img> tag, it works, but I don't think that's the proper way. :( EDIT: If I assign d:\myapp\bin\HtmlPages\support.gif the image is displayed. If I assign "..\HtmlPages\support.gif" or "support.gif" the image is not shown.
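
    A hedged sketch of one alternative (class and method names here are illustrative, not from the question): instead of reading the HTML into a stream, let the WebBrowser control navigate to the file's absolute path, built from the executable's base directory, so relative <img> paths resolve against the HtmlPages folder.

      // Minimal sketch, assuming the HtmlPages folder sits next to the executable in bin\.
      using System;
      using System.IO;
      using System.Windows.Forms;

      public static class HelpPageLoader
      {
          public static void ShowSupportPage(WebBrowser browser)
          {
              // Absolute path built from the executable's folder.
              string htmlPath = Path.Combine(
                  AppDomain.CurrentDomain.BaseDirectory, @"HtmlPages\supportHtml.html");

              // Navigating (rather than assigning DocumentText from a stream) gives the page
              // a base URI, so src="support.gif" resolves to bin\HtmlPages\support.gif.
              browser.Navigate(new Uri(htmlPath));
          }
      }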

    Read the article

  • Recursive N-way merge/diff algorithm for directory trees?

    - by BobMcGee
    What algorithms or Java libraries are available to do N-way, recursive diff/merge of directories? I need to be able to generate a list of folder trees that have many identical files, and have subdirectories with many similar files. I want to be able to use 2-way merge operations to quickly remove as much redundancy as possible. Goals: Find pairs of directories that have many similar files between them. Generate a short list of directory pairs that can be synchronized with a 2-way merge to eliminate duplicates. Should operate recursively (there may be nested duplicates of higher-level directories). Run time and storage should be O(n log n) in the number of directories and files. Should be able to use an embedded DB or page to disk for processing more files than fit in memory (100,000+). Optional: generate an ancestry and change-set between folders. Optional: sort the merge operations by how many duplicates they can eliminate. I know how to use hashes to find duplicate files in roughly O(n) space, but I'm at a loss for how to go from this to finding partially overlapping sets between folders and their children. EDIT: some clarification. The tricky part is the difference between "exact same" contents (otherwise hashing file hashes would work) and "similar" (which will not). Basically, I want to feed this algorithm a set of directories and have it return a set of 2-way merge operations I can perform in order to reduce duplicates as much as possible with as few conflicts as possible. It's effectively constructing an ancestry tree showing which folders are derived from each other. The end goal is to let me incorporate a bunch of different folders into one common tree. For example, I may have a folder holding programming projects, and then copy some of its contents to another computer to work on them. Then I might back up an intermediate version to a flash drive. Except I may have 8 or 10 different versions, with slightly different organizational structures or folder names. I need to be able to merge them one step at a time, so I can choose how to incorporate changes at each step of the way. This is actually more or less what I intend to do with my utility (bring together a bunch of scattered backups from different points in time). I figure if I can do it right I may as well release it as a small open source util. I think the same tricks might be useful for comparing XML trees, though.
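
    Not the full O(n log n) answer the question asks for, just a hedged sketch of the first step: index files by content hash, then count how many identical files each pair of directories shares; the highest-scoring pairs are the best 2-way merge candidates. Shown in C# for consistency with the other examples on this page; it ports to Java directly. All names are illustrative.

      using System;
      using System.Collections.Generic;
      using System.IO;
      using System.Security.Cryptography;

      public static class FolderSimilarity
      {
          // content hash -> directories that contain a file with that content
          public static Dictionary<string, List<string>> IndexByHash(string root)
          {
              var index = new Dictionary<string, List<string>>();
              using (SHA1 sha = SHA1.Create())
              {
                  foreach (string file in Directory.GetFiles(root, "*", SearchOption.AllDirectories))
                  {
                      string hash;
                      using (FileStream stream = File.OpenRead(file))
                          hash = BitConverter.ToString(sha.ComputeHash(stream));

                      List<string> dirs;
                      if (!index.TryGetValue(hash, out dirs))
                          index[hash] = dirs = new List<string>();
                      string dir = Path.GetDirectoryName(file);
                      if (!dirs.Contains(dir))
                          dirs.Add(dir);
                  }
              }
              return index;
          }

          // directory pair -> number of identical files shared; high counts are merge candidates
          public static Dictionary<string, int> ScorePairs(Dictionary<string, List<string>> index)
          {
              var scores = new Dictionary<string, int>();
              foreach (List<string> dirs in index.Values)
                  for (int i = 0; i < dirs.Count; i++)
                      for (int j = i + 1; j < dirs.Count; j++)
                      {
                          // Order the pair so "A <-> B" and "B <-> A" count as the same key.
                          string a = dirs[i], b = dirs[j];
                          string key = string.CompareOrdinal(a, b) < 0 ? a + " <-> " + b : b + " <-> " + a;
                          int n;
                          scores.TryGetValue(key, out n);
                          scores[key] = n + 1;
                      }
              return scores;
          }
      }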

    Read the article

  • checking crc32 of a file

    - by agent154
    This is not really a "how to" question. Is there a "standard" file structure that applications use to store the checksums of files in a folder? I'm developing a tool to check various things like CRC32, MD5, SHA1, SHA256, etc. I'd like to have my program store the various hashes in files in the folder being checked. I know there are commonly used files called 'md5sums' or 'sha1sums'. But what about CRC? I haven't noticed anything similar for it, and if there is such a file, what is its structure? Thanks.
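
    For what it's worth, the closest widely used convention for CRC32 seems to be the .sfv ("Simple File Verification") format: one "<filename> <CRC32 in hex>" line per file. Below is a hedged sketch (file and method names are illustrative); since older versions of .NET have no built-in CRC32, it includes the standard table-based implementation with polynomial 0xEDB88320.

      using System;
      using System.IO;

      public static class Crc32
      {
          private static readonly uint[] Table = BuildTable();

          private static uint[] BuildTable()
          {
              var table = new uint[256];
              for (uint i = 0; i < 256; i++)
              {
                  uint c = i;
                  for (int k = 0; k < 8; k++)
                      c = (c & 1) != 0 ? 0xEDB88320u ^ (c >> 1) : c >> 1;
                  table[i] = c;
              }
              return table;
          }

          public static uint Compute(Stream input)
          {
              uint crc = 0xFFFFFFFFu;
              int b;
              while ((b = input.ReadByte()) != -1)
                  crc = Table[(crc ^ (uint)b) & 0xFF] ^ (crc >> 8);
              return crc ^ 0xFFFFFFFFu;
          }

          // Write an .sfv-style line ("name CRC32HEX") for every file in the folder.
          public static void WriteSfv(string folder, string sfvPath)
          {
              using (var writer = new StreamWriter(sfvPath))
              {
                  foreach (string file in Directory.GetFiles(folder))
                      using (FileStream stream = File.OpenRead(file))
                          writer.WriteLine("{0} {1:X8}", Path.GetFileName(file), Compute(stream));
              }
          }
      }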

    Read the article

  • Problems with ASP.NET State Service version; state service is 1.1, website is 3.5

    - by Mick Byrne
    Hi there, I have an ASP.NET 3.5 website running on Windows Server 2003 and I'm using the ASP.NET State Service to manage sessions. It appears to work, but then I regularly get an error saying my code needs version 2.0 of the State Service running to work (I think that's what it said; I've temporarily switched back to storing sessions InProc). Refresh the page and the error goes away (for a bit; it's bound to come back). So I looked at the properties of the ASP.NET State Service in the Services interface and it maps to an .exe in the 1.1 framework folder: C:\WINDOWS\Microsoft.NET\Framework\v1.1.4322\aspnet_state.exe There's a corresponding version in the 2.0 framework folder, but I don't know how to add it as a new service. I'm also not sure that adding the 2.0 version (and stopping and/or removing the 1.1 version) will solve the problem. Thanks in advance for any help anyone can provide. Mick

    Read the article

  • Own data format for the iPhone

    - by Stefan
    Hi, I would like to create my own data format for an iPhone app. The files should be structured similarly to, e.g., Apple's iWork files (.pages). That means I have a folder with some files in it. The file 'Juicy.fruit' contains: Fruits ---> Apple.xml ---> Banana.xml ---> Pear.xml ---> PreviewPicture.png This folder "Fruits" should be packed into a handy file 'Juicy.fruit'. Compression isn't necessary. How could I achieve this? I've discovered some open source ZIP libraries. However, I would like to build my own data format with the iPhone's built-in libs (if possible). Best regards, Stefan

    Read the article

  • What are the permissions I need?

    - by Eric
    My folder at /usr/local/www/.ext_env_vars has a bunch of files in it that my app needs to read. The user is 'webapp'. So, I changed the perms like so: chmod -R 400 .ext_env_vars; chown -R webapp.webapp .ext_env_vars The application can't read these. However, when I chmod 777, they are read by the app. So it isn't that I have a path problem; it seems to be permissions only. So, what would I have to do to the permissions to make webapp able to read those files in the .ext_env_vars folder? Thanks Eric

    Read the article

  • WPF/MVVM - should we create a different class for each ViewModel?

    - by FMFF
    I'm attempting the example from the excellent "How Do I" video for MVVM by Todd Miranda, found on MSDN. I'm trying to adapt the example for my learning purposes. In the example, he has a ViewModel called EmployeeListViewModel. Now if I want to include Departments, should I create another ViewModel such as DepartmentListViewModel? The example has EmployeeRepository as the data source. In my case, I'm trying to use an Entity object as the data source (Employees.edmx in the Model folder and EmployeeRepository.cs in the DataAccess folder). If I want to display the list of Departments, should I create a separate class called DepartmentRepository and put all department-related method definitions there? What if I want to retrieve the employee name and their department's name together? Where should I place the methods for this? I'm very new to WPF and MVVM, so please let me know if any of the above needs to be re-phrased. Thank you for all the help.
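
    A minimal sketch of the one-ViewModel-per-list option (Department and DepartmentRepository below are illustrative stubs, not types from the video): each list gets its own ViewModel backed by its own repository, and a combined employee-plus-department query would simply be another method on whichever repository owns that screen's data, exposed through that screen's ViewModel.

      using System.Collections.Generic;
      using System.Collections.ObjectModel;

      // Illustrative stub types standing in for the EDMX-backed repository.
      public class Department { public string Name { get; set; } }

      public class DepartmentRepository
      {
          public IEnumerable<Department> GetDepartments()
          {
              // Stub: in the real app this would query the Employees.edmx model.
              yield return new Department { Name = "Engineering" };
          }
      }

      public class DepartmentListViewModel
      {
          private readonly DepartmentRepository _repository;

          public ObservableCollection<Department> Departments { get; private set; }

          public DepartmentListViewModel(DepartmentRepository repository)
          {
              _repository = repository;
              Departments = new ObservableCollection<Department>();
              foreach (Department d in _repository.GetDepartments())
                  Departments.Add(d);
          }
      }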

    Read the article

  • In an ASP.NET Web Site Project, what is the best option to manage modular widgets

    - by Juan Sagasti
    We are creating a modular Web Site Project and all the UI is based on simple HTML pages with JavaScript widgets. These widgets obtain their data from a WCF class located in the app_code folder. Each widget has three files (js, css, cs) and an image directory, and we want all these files to be located under the same directory, say: /widgets/mywidget/ The problem is that the .cs file needs to be in the app_code folder to be compiled dynamically, and that would break our widget modularity, forcing the installer to distribute the widget code in two places. One of the options is to use compiled assemblies instead of .cs files and use Assembly.Load() or something similar to load them when needed. What other options do you see?
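
    A hedged sketch of the compiled-assembly option (folder layout, file names and type names are assumptions): each widget folder carries its own DLL next to its js/css files, and the site loads the widget's service type by name at runtime instead of relying on app_code compilation.

      using System;
      using System.Reflection;
      using System.Web;

      public static class WidgetLoader
      {
          public static object CreateWidgetService(string widgetName, string typeName)
          {
              // e.g. ~/widgets/mywidget/mywidget.dll (hypothetical layout).
              string dllPath = HttpContext.Current.Server.MapPath(
                  "~/widgets/" + widgetName + "/" + widgetName + ".dll");

              // Load the widget's assembly from its own folder and instantiate its service type.
              Assembly assembly = Assembly.LoadFrom(dllPath);
              return Activator.CreateInstance(assembly.GetType(typeName, true));
          }
      }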

    Read the article

  • running an RMI server in command line and eclipse

    - by Noona
    I need to run my RMI server from the command line. My class files reside in this folder: C:\workspace\distributedhw2\AgencyServers\RmiEncodingServer\RmiServerClasses in package hw2.rmi.server. The code base resides in this folder: C:\workspace\distributedhw2\AgencyServers\RmiEncodingServer\RmiServerCodeBase in package hw2.rmi.server. I use this command line: java –classpath C:\workspace\distributedhw2\AgencyServers\RmiEncodingServer\RmiServerClasses\ -Djava.rmi.server.codebase=file:/C:\workspace\distributedhw2\AgencyServers\RmiEncodingServer\ Djava.security.policy=c:\HW2\permissions.policy hw2.rmi.server.RmiEncodingServer but I get a "class not found" exception as follows: Exception in thread "main" java.lang.NoClassDefFoundError: ûclasspath Caused by: java.lang.ClassNotFoundException: ûclasspath at java.net.URLClassLoader$1.run(Unknown Source) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) Could not find the main class: GÇôclasspath. Program will exit. Where have I gone wrong? Also, if you can provide instructions on how to run the server in Eclipse: I added the following as a VM argument, but I get a class-not-found exception for a class that is in the RmiServerCodeBase: -Djava.security.policy=C:\workspace\distributedhw2\permissions.policy -Djava.rmi.server.codebase=file:/C:\workspace\distributedhw2\AgencyServers\RmiEncodingServer Thanks

    Read the article

  • Visual Studio 2008 adding incorrect working folders to TFS Workspace

    - by Bryan Rowe
    I am using Visual Studio 2008 with TFS. I have one workspace set up with one working folder. I map the root source control folder $/ to C:\TFS and get all code. When working on any project under the root, Visual Studio will randomly add incorrectly mapped working folders to my workspace. For example, it might map $/WebProject/ to C:\TFS\WebProject\DataAccess -- where the real files exist at C:\TFS\WebProject. Once it incorrectly adds these working folders, I can no longer open the solution. I am forced to remove the working folders that Visual Studio added and get latest from TFS. Has anyone experienced this? Is there something I can do to avoid running into this?

    Read the article

  • WNetAddConnection2 from a Windows Service

    - by Flavio
    I'm trying to connect to a remote password protected shared folder from a Windows service, which runs as LocalSystem account. It seems that the LocalSystem account is unable to directly access password-protected network shares using WNetAddConnection2() or similar calls. Can anyone confirm this? I've read that impersonating an administrator user might be the way to go. I've tried using LogonUser() and ImpersonateLoggedOnUser() before WNetAddConnection2(), it appears that the mount of the network path succeeds, but then actual accesses (e.g. enumerating of files in remote folder) fail. Any ideas? Thanks.
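
    Not a confirmed fix, but one variation worth calling out: when impersonating purely for network access, a LOGON32_LOGON_NEW_CREDENTIALS (type 9) logon is often used instead of an interactive logon, so the supplied credentials apply to the remote share while the local identity stays the same. A hedged sketch follows, assuming .NET Framework service code with full trust (the question may equally be native code; the P/Invoke shape is the same).

      using System;
      using System.ComponentModel;
      using System.IO;
      using System.Runtime.InteropServices;
      using System.Security.Principal;

      public static class NetworkImpersonation
      {
          private const int LOGON32_LOGON_NEW_CREDENTIALS = 9;
          private const int LOGON32_PROVIDER_WINNT50 = 3;

          [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
          private static extern bool LogonUser(string user, string domain, string password,
              int logonType, int logonProvider, out IntPtr token);

          [DllImport("kernel32.dll", SetLastError = true)]
          private static extern bool CloseHandle(IntPtr handle);

          public static void ListRemoteFiles(string user, string domain, string password, string uncPath)
          {
              IntPtr token;
              if (!LogonUser(user, domain, password,
                      LOGON32_LOGON_NEW_CREDENTIALS, LOGON32_PROVIDER_WINNT50, out token))
                  throw new Win32Exception(Marshal.GetLastWin32Error());

              try
              {
                  // While the NEW_CREDENTIALS impersonation is in effect, plain UNC access
                  // uses the supplied credentials for the remote server.
                  using (WindowsImpersonationContext ctx = new WindowsIdentity(token).Impersonate())
                  {
                      foreach (string file in Directory.GetFiles(uncPath))
                          Console.WriteLine(file);
                  }
              }
              finally
              {
                  CloseHandle(token);
              }
          }
      }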

    Read the article

  • Rails 3 custom renderer: where do I put this code?

    - by Derick Bailey
    I'm following along with Yehuda's example of how to build a custom renderer for Rails 3, according to this post: http://www.engineyard.com/blog/2010/render-options-in-rails-3/ I've got my code working, but I'm having a hard time figuring out where this code should live. Right now, I've got my code stuck right inside my controller file. Doing this, everything works. When I move the code to the lib folder, though, I have to explicitly 'require' my file in the controller that needs the renderer or it won't work. Yes, the file gets loaded automatically when it sits in the lib folder, but the code to add the renderer isn't working for some reason until I do a require on it. Where should I put my code to add the renderer and MIME type, so that Rails 3 will pick it up and register it for me, without me having to manually require the file in my controller?

    Read the article

  • Outlook Macro not executing

    - by Tim
    Hello all, we are using an Outlook macro for incoming emails to unzip the attachments, log them in SQL Server and copy them to a special working folder where a Windows service processes them. The problems are: the user must be logged in at the server, otherwise the macro won't run (I think there is no workaround for this); and whenever a modal dialog pops up in Outlook and waits for user input, the macro won't execute either (e.g. AutoArchive). I have turned off the archive-every-x-days setting in the options. Because it's a production environment, are there other possible windows preventing the macro from executing which I must disable now? Our Outlook macro solution surely is not the best because of the above problems. Are there better alternatives to read emails, unzip attachments and move them to a working folder? My language is VB.Net and the server OS is Windows 2008. Regards, Tim
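
    Purely as a hedged sketch of the unzip/log/move half (it does not solve the mailbox-access half), shown in C# although the asker uses VB.NET, and assuming SharpZipLib as the zip library since .NET 2.0/3.5 has no built-in zip support; the table and folder names are illustrative.

      using System;
      using System.Data.SqlClient;
      using System.IO;
      using ICSharpCode.SharpZipLib.Zip;

      public static class AttachmentProcessor
      {
          public static void Process(string zipPath, string workingFolder, string connectionString)
          {
              // Extract every file in the archive straight into the working folder.
              new FastZip().ExtractZip(zipPath, workingFolder, null);

              // Log the processed attachment in SQL Server (hypothetical table name).
              using (var connection = new SqlConnection(connectionString))
              using (var command = new SqlCommand(
                  "INSERT INTO dbo.ProcessedAttachments (FileName, ProcessedAt) VALUES (@name, @at)", connection))
              {
                  command.Parameters.AddWithValue("@name", Path.GetFileName(zipPath));
                  command.Parameters.AddWithValue("@at", DateTime.UtcNow);
                  connection.Open();
                  command.ExecuteNonQuery();
              }
          }
      }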

    Read the article

  • SVN hook script conflict

    - by user297303
    I am trying to write a pre-commit hook script that will alter a specific SVN property of a folder/file. The script looks fairly similar to the one documented in the SVN book. I figured out how to set/change the property of a node, and when executing the binding function svn.fs.commit_txn the property of the node actually gets set. But at the moment TortoiseSVN always gives me a conflict on the folder whose property I am altering. I wrote my script in Python but am new to Python and hook scripts. I hope someone can give me a clue as to why I am getting this conflict.

    Read the article

  • Build Machine Configuration Recommendations?

    - by IPX Ares
    We have a new build machine for our programming team to start using. We are still trying to figure out how we want to organize everything to get the best configuration for building EXEs and DLLs. We are using VB6, VB.Net 2005, and VSS 2005. We were thinking of making a set of working folders for each project, release, and support ticket. Does anyone have experience with a similar setup? What were your likes/dislikes? Any recommendations (new VSS IDs, folder configuration, setting the working folder, updating/building files)?

    Read the article

  • Storing And Using Microsoft User Account Credentials in MS SQL Srv 2008 Database

    - by instantmusic
    I'm not exactly sure how to word this for the sake of the title, so please forgive me. I also can't seem to figure out how to even google this question, so I'm hoping I can get a lead in the right direction. Part of my software (a VB.NET app) requires the ability to access/read/write a shared network folder. I have an option for the user to specify any credentials that might be needed to access said folder. I want to store the given credentials in the MS SQL database as part of the config (I have a table which contains configuration). My concern is that the password for the user account will be unencrypted. Yet if I encrypt the password, the VB.NET app and/or database will be unable to use the credentials for file I/O operations unless the password is decrypted before use. I'm fishing for suggestions on how to better handle this situation.
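
    One common option, sketched below under assumptions (the entropy value, scope and names are illustrative, and it's shown in C#; the VB.NET translation is direct): protect the password with Windows DPAPI via ProtectedData before storing the Base64 result in the configuration table, and unprotect it only at the moment the share credentials are needed. DataProtectionScope.LocalMachine ties decryption to the machine that encrypted it.

      using System;
      using System.Security.Cryptography;
      using System.Text;

      public static class CredentialProtector
      {
          // Illustrative extra entropy; anything app-specific works, or it can be omitted.
          private static readonly byte[] Entropy = Encoding.UTF8.GetBytes("app-specific-salt");

          // Store the returned Base64 string in the SQL configuration table.
          public static string Protect(string password)
          {
              byte[] cipher = ProtectedData.Protect(
                  Encoding.UTF8.GetBytes(password), Entropy, DataProtectionScope.LocalMachine);
              return Convert.ToBase64String(cipher);
          }

          // Decrypt only at the moment the share credentials are needed.
          public static string Unprotect(string stored)
          {
              byte[] plain = ProtectedData.Unprotect(
                  Convert.FromBase64String(stored), Entropy, DataProtectionScope.LocalMachine);
              return Encoding.UTF8.GetString(plain);
          }
      }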

    Read the article

  • How do I create a shortcut to CMD.EXE that asks for elevation using INNO Setup?

    - by Maltrap
    Hi, using INNO Setup I currently have the following entry under the [Icons] section: Name: "{group}\My App\My App - Command Prompt"; Filename: "cmd.exe"; WorkingDir: "{app}" This shortcut launches a command prompt straight into my application's folder. Unfortunately it isn't launched elevated, which means the commands the user runs from there don't have the appropriate rights. Using INNO Setup, how can I create a shortcut to CMD.exe (in a specific folder) that requires elevation? For other applications this can be done via a manifest file. My question is, how do I do it using INNO Setup, and if I can't, what are my alternatives?

    Read the article

  • Disaster, or Migration?

    - by Rob Farley
    This post is in two parts – technical and personal. And I should point out that it’s prompted in part by this month’s T-SQL Tuesday, hosted by Allen Kinsel. First, the technical: I’ve had a few conversations with people recently about migration – moving a SQL Server database from one box to another (sometimes, but not primarily, involving an upgrade). One question that tends to come up is that of downtime. Obviously there will be some period of time between the old server being available and the new one. The way that most people seem to think of migration is this: Build a new server. Stop people from using the old server. Take a backup of the old server. Restore it on the new server. Reconfigure the client applications (or alternatively, configure the new server to use the same address as the old). Make the new server online. There are other things involved, such as testing, of course. But this is essentially the process that people tell me they’re planning to follow. The bit that I want to look at today (as you’ve probably guessed from my title) is the “backup and restore” section. If a SQL database is using the Simple Recovery Model, then the only restore option is the last database backup. This backup could be full or differential. The transaction log never gets backed up in the Simple Recovery Model. Instead, it truncates regularly to stay small. One that’s using the Full Recovery Model (or Bulk-Logged) won’t truncate its log – the log must be backed up regularly. This provides the benefit of having a lot more options available for restores. It’s a requirement for most systems of High Availability, because if you’re making sure that a spare box is up-and-running, ready to take over, then you have to be interested in the logs that are happening on the current box, rather than truncating them all the time. A High Availability system such as Mirroring, Replication or Log Shipping will initialise the spare machine by restoring a full database backup (and maybe a differential backup if available), and then any subsequent log backups. Once the secondary copy is close, transactions can be applied to keep the two in sync. The main aspect of any High Availability system is to have a redundant system that is ready to take over. So the similarity for migration should be obvious. If you need to move a database from one box to another, then introducing a High Availability mechanism can help. By turning on the Full Recovery Model and then taking a backup (so that the now-interesting logs have some context), logs start being kept, and are therefore available for getting the new box ready (even if it’s an upgraded version). When the migration is ready to occur, a failover can be done, letting the new server take over the responsibility of the old, just as if a disaster had happened. Except that this is a planned failover, not a disaster at all. There’s a fine line between a disaster and a migration. Failovers can be useful in patching, upgrading, maintenance, and more. Hopefully, even an unexpected disaster can be seen as just another failover, and there can be an opportunity there – perhaps to get some work done on the principal server to increase robustness. And if I’ve just set up a High Availability system for even the simplest of databases, it’s not necessarily a bad thing. :) So now the personal: It’s been an interesting time recently... June has been somewhat odd. A court case with which I was involved got resolved (through mediation).
    I can’t go into details, but my lawyers tell me that I’m allowed to say how I feel about it. The answer is ‘lousy’. I don’t regret pursuing it as long as I did – but in the end I had to make a decision regarding the commerciality of letting it continue, and I’m going to look forward to the days when the kind of money I spent on my lawyers is small change. Mind you, if I had a similar situation with an employer, I’d do the same again, but that doesn’t really stop me feeling frustrated about it. The following day I had to fly to country Victoria to see my grandmother, who wasn’t expected to last the weekend. She’s still around a week later as I write this, but her 92-year-old body has basically given up on her. She’s been a Christian all her life, and is looking forward to eternity. We’ll all miss her though, and it’s hard to see my family grieving. Then on Tuesday, I was driving back to the airport with my family to come home, when something really bizarre happened. We were travelling down the freeway, just pulled out to go past a truck (farm-truck sized, not a semi-trailer), when a car-sized mass of metal fell off it. It was something like an industrial air-conditioner, but from where I was sitting, it was just a mass of spinning metal, like something out of a movie (one friend described it as “holidays by Michael Bay”). Somehow, and I really don’t know how, the part of it nearest us bounced high enough to clear the car, and there wasn’t even a scratch. We pulled over to check, and I was just thanking God that we’d changed lanes when we had, and that we remained unharmed. I had all kinds of thoughts about what could’ve happened if we’d had something that size land on the windscreen... All this has drilled home that while I feel that I haven’t provided as well for the family as I could’ve done (like by pursuing an expensive legal case), I shouldn’t even consider that I have proper control over things. I get to live life, and make decisions based on what I feel is right at the time. But I’m not going to get everything right, and there will be things that feel like disasters, some which could’ve been in my control and some which are very much beyond my control. The case feels like something I could’ve pursued differently, a disaster that could’ve been avoided in some way. Gran dying is lousy of course. An accident on the freeway would have been awful. I need to recognise that the worst disasters are ones that I can’t affect, and that I need to look at things in context – perhaps seeing everything that happens as a migration instead. Life is never the same from one day to the next. Every event has a before and an after – sometimes it’s clearly positive, sometimes it’s not. I remember good events in my life (such as my wedding), and bad (such as the loss of my father when I was ten, or the back injury I had eight years ago). I’m not suggesting that I know how to view everything from the “God works all things for good” perspective, but I am trying to look at last week as a migration of sorts. Those things are behind me now, and the future is in God’s hands. Hopefully I’ve learned things, and will be able to live accordingly. I’ve come through this time now, and even though I’ll miss Gran, I’ll see her again one day, and the future is bright.

    Read the article

  • [ASP.NET] Problems with error: "Access to the path <path> is denied."

    - by Tony
    Hi, I was looking for the trick to resolve that error (Google, stackoverflow.com, etc.) and nothing works. I need to dynamically create an .aspx file via the ASP.NET application. What I've done to try to fix it: 1) In the folder's Properties - Security, I added IUSR_TONY and also IIS_IUSRS and allowed them Full Control on the folder, just to check if that would help. Nope, it doesn't. 2) In IIS Manager, I tried to change the Application Pool Defaults identity (based on that). I checked all options, with no success. I don't know what more to do to fix it. Any ideas?
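
    A small diagnostic sketch that sometimes shortens the guessing: print the Windows identity the request actually executes under, then grant that specific account Modify rights on the target folder instead of trying IUSR_TONY/IIS_IUSRS. The handler name is illustrative; any page or .ashx handler works.

      using System.Security.Principal;
      using System.Web;

      public class WhoAmIHandler : IHttpHandler
      {
          public bool IsReusable { get { return true; } }

          public void ProcessRequest(HttpContext context)
          {
              // Typically the application pool identity, e.g. IIS APPPOOL\DefaultAppPool or NETWORK SERVICE.
              context.Response.ContentType = "text/plain";
              context.Response.Write(WindowsIdentity.GetCurrent().Name);
          }
      }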

    Read the article

  • Using .htaccess to replace backslash in URL with forward-slash

    - by DamienL
    I realise that a backslash should never appear in a URL in a form other than a URL escape code; however, in this case the URLs are being generated by a .NET application for generating flashbooks. I have contacted the developer of this application with a bug report. In the interim I would like to use .htaccess to rewrite the offending backslashes. This is how the URLs appear in the Fiddler debugging proxy: www.example.com/folder/folder/thumbs%5C1.jpg I am using Firefox, and it looks as though Firefox is translating them into the URL-encoded equivalent ( \ == %5C ). Interestingly, IE translates the backslash into a forward-slash automatically (not adhering to standards, but convenient in this case). Is there a way to use .htaccess to rewrite all \ to /?

    Read the article

  • Still confused about parsing JSON in GWT

    - by graybow
    Please help me. I created a project named 'tesdb3' in Eclipse. I created the PHP side to access the database and made the output JSON. I created userdata.php in the war folder, then compiled the tesdb3 project. The tesdb3 folder and the userdata.php from war were moved to the local server (I use WAMP). I put the PHP in the tesdb3 folder. This is the result from my localhost/phpmyadmin/tesdb3/userdata.php [{"kode":"002","nama":"bambang gentolet"},{"kode":"012","nama":"Algiz"}] From that result I think the PHP side is working. Then I created UserData.java as a JSNI overlay like this: package com.tesdb3.client; import com.google.gwt.core.client.JavaScriptObject; class UserData extends JavaScriptObject{ protected UserData() {} public final native String getKode() /*-{ return this.kode; }-*/; public final native String getNama() /*-{ return this.nama; }-*/; public final String getFullData() { return getKode() + ":" + getNama(); } } Then finally, in Tesdb3.java: public class Tesdb3 implements EntryPoint { String url= "http://localhost/phpmyadmin/tesdb3/datauser.php"; private native JsArray<UserData> getuserdata(String json) /*-{ return eval(json); }-*/; public void LoadData() throws RequestException{ RequestBuilder builder = new RequestBuilder(RequestBuilder.GET, URL.encode(url)); builder.sendRequest(null, new RequestCallback(){ @Override public void onError(Request request, Throwable exception) { Window.alert("error " + exception); } public void onResponseReceived(Request request, Response response) { Window.alert("betul" + response.getText()); //data(getuserdata(response.getText())); } }); } public void data(JsArray<UserData> data){ for (int i = 0; i < data.length(); i++) { String lkode =data.get(i).getKode(); String lname =data.get(i).getNama(); Label l = new Label(lkode+" "+lname); tb.setWidget(i, 0, l); } RootPanel.get().add(new HTML("my data")); RootPanel.get().add(tb); } public void onModuleLoad() { try { LoadData(); } catch (RequestException e) { } } } The result just shows the string "my data", and Window.alert(response.getText()) shows nothing. Why?

    Read the article

  • Monolog conversations in SQL Service Broker 2008

    - by hemil
    Hi, I have a scenario in which I need to process (in SQL Server) messages being delivered as .xml files in a folder, in real time. I started investigating SQL Service Broker for my queuing needs. Basically, I want Service Broker to pick up my .xml files and place them in a queue as they arrive in the folder. But SQL Service Broker does not support "monolog" conversations, at least not in the current version; it supports only a dialog between an initiator and a target service. I can use MSMQ, but then I will have two things to maintain - the .NET code for file processing in MSMQ and the SQL Server T-SQL stored procs. What options do I have left? Thanks.
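
    One of the remaining options, sketched here as an assumption rather than a recommendation: keep the .NET piece as thin as possible, a watcher that only inserts each arriving .xml file into a SQL Server table (the table name below is hypothetical), and let T-SQL do all of the real processing from there.

      using System;
      using System.Data.SqlClient;
      using System.IO;

      public class XmlDropWatcher
      {
          private readonly FileSystemWatcher _watcher;
          private readonly string _connectionString;

          public XmlDropWatcher(string folder, string connectionString)
          {
              _connectionString = connectionString;
              _watcher = new FileSystemWatcher(folder, "*.xml");
              _watcher.Created += OnCreated;
              _watcher.EnableRaisingEvents = true;
          }

          private void OnCreated(object sender, FileSystemEventArgs e)
          {
              // Note: Created can fire before the writer has finished the file;
              // production code would retry or wait for an exclusive lock first.
              string xml = File.ReadAllText(e.FullPath);
              using (var connection = new SqlConnection(_connectionString))
              using (var command = new SqlCommand(
                  "INSERT INTO dbo.IncomingMessages (FileName, Body) VALUES (@name, @body)", connection))
              {
                  command.Parameters.AddWithValue("@name", e.Name);
                  command.Parameters.AddWithValue("@body", xml);
                  connection.Open();
                  command.ExecuteNonQuery();
              }
          }
      }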

    Read the article

  • app_offline not being respected?

    - by Jonas
    I'm doing some tests with deploying an application using the app_offline.htm functionality in ASP.NET. I've found that if I have a working application, put an app_offline.htm file in the root, and then rename the \bin folder, my app_offline.htm file does not get displayed. If I rename the bin folder back to "bin", my app_offline.htm file gets displayed as expected. I had assumed that the presence of app_offline would supersede anything else that happens... am I mistaken? This is on Windows 7 / IIS 7.5.

    Read the article
