Search Results

Search found 17847 results on 714 pages for 'virtual disk'.


  • Why does File.Exists return false?

    - by Jonas Stawski
    I'm querying all the images on the Android device like this:

        string[] columns = { MediaStore.Images.Media.InterfaceConsts.Data, MediaStore.Images.Media.InterfaceConsts.Id };
        string orderBy = MediaStore.Images.Media.InterfaceConsts.Id;
        var imagecursor = ManagedQuery(MediaStore.Images.Media.ExternalContentUri, columns, null, null, orderBy);
        for (int i = 0; i < this.Count; i++) {
            imagecursor.MoveToPosition(i);
            Paths[i] = imagecursor.GetString(dataColumnIndex);
            Console.WriteLine(Paths[i]);
            Console.WriteLine(System.IO.File.Exists(Paths[i]));
        }

    The problem is that the output shows that some files don't exist. Here's a sample of the output:

        /storage/sdcard0/Download/On-Yom-Kippur-Jews-choose-different-shoes-VSETQJ6-x-large.jpg False
        /storage/sdcard0/Download/397277_10151250943161341_876027377_n.jpg False
        /storage/sdcard0/Download/Roxy_Cottontail_&_Melo-X_Present..._Some_Bunny_Love's_You.jpg False
        /storage/sdcard0/Download/album-The-Rolling-Stones-Some-Girls.jpg True
        /storage/sdcard0/Download/some-people-ust-dont-appreciate-fashion[1].jpg True
        /storage/sdcard0/Download/express.gif True
        ...
        /storage/sdcard0/Download/some-joys-are-expressed-better-in-silence.JPG False

    How is this possible? I downloaded these images myself from the internet! They should exist on disk.
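
    One property of File.Exists worth knowing while debugging this: it returns false both when the path does not exist and when the process is not permitted to read it, swallowing the difference. A small diagnostic sketch (the loop is illustrative, not from the question) that tells the two cases apart by actually opening each file:

        // Diagnostic sketch: File.Exists hides permission failures behind
        // a plain false, so force a real access check by opening the file.
        foreach (string path in Paths)
        {
            try
            {
                using (var stream = System.IO.File.OpenRead(path))
                {
                    Console.WriteLine("{0}: readable, {1} bytes", path, stream.Length);
                }
            }
            catch (System.IO.FileNotFoundException)
            {
                Console.WriteLine("{0}: genuinely missing (stale MediaStore row?)", path);
            }
            catch (UnauthorizedAccessException)
            {
                Console.WriteLine("{0}: exists but is not readable by this process", path);
            }
            catch (System.IO.IOException ex)
            {
                Console.WriteLine("{0}: {1}", path, ex.Message);
            }
        }

    A missing READ_EXTERNAL_STORAGE permission (introduced in Android 4.1 for reading shared storage) is one common way to land in the "exists but unreadable" case.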


  • CRONTAB doesn't finish svndump

    - by Andrew
    I just discovered that the automated dumps I've been creating of my SVN repository have been getting cut off early, and basically only half the dump is there. It's not an emergency, but I hate being in this situation. It defeats the purpose of making automated backups in the first place. The command I'm using is below. If I execute it manually in the terminal, it completes fine; the output.txt file is 16 megs in size with all 335 revisions. But if I leave it to crontab, it bails at the halfway mark, at around 8.1 megs and only the first 169 revisions.

        # m h dom mon dow command
        18 00 * * * svnadmin dump /var/svn/repos/myproject > /home/andrew/output.txt

    I actually save to a dated gzipped file, and there's no shortage of space on the server, so this is not a disk space issue. It seems to bail after two seconds, so this could be a time issue, but the file size is the same every single time for the past month, so I don't think it's that either. Does crontab execute within a limited memory space?
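
    One low-risk first step, assuming the usual cron pitfalls (minimal PATH, no stderr capture): use an absolute path to the binary and log stderr, so the next truncated dump explains itself. A sketch of an adjusted crontab entry:

        # m h dom mon dow command
        18 00 * * * /usr/bin/svnadmin dump /var/svn/repos/myproject > /home/andrew/output.txt 2> /home/andrew/svndump.err

    svnadmin dump writes its progress notes ("* Dumped revision 169.") to stderr, so the log should show exactly where, and often why, the dump stops under cron.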


  • Android - Two different programs at the same time in an emulator

    - by Léa Massiot
    I'm new to Android development. My OS is WinXP. I'm trying to install two different applications on an Android device emulator from the command line. I have two Android projects, "ap1" and "ap2". In the "ap1" project directory I ran "ant debug" and got an "ap1.apk" executable. In the "ap2" project directory I ran "ant debug" and got an "ap2.apk" executable. I created an Android Virtual Device:

        android create avd -n avd1 -t 1 --abi x86

    I launched the emulator:

        emulator -avd avd1 -verbose

    The "adb devices" command returns:

        List of devices attached
        emulator-5554   device

    I installed the first program on the emulator and ran it; it worked:

        adb -s emulator-5554 install "ap1.apk"
        adb shell am start -a android.intent.action.MAIN -n my.pkg.android/.Activity1

    I installed the second program on the emulator and ran it; it worked:

        adb -s emulator-5554 install "ap2.apk"
        adb shell am start -a android.intent.action.MAIN -n my.pkg2.android/.AnotherActivity1

    All this works, except that the second executable "replaced" the first one. If I try to run the first executable, I get an error:

        adb shell am start -a android.intent.action.MAIN -n my.pkg.android/.Activity1
        Starting: Intent { act=android.intent.action.MAIN cmp=my.pkg.android/.Activity1 }
        Error type 3
        Error: Activity class {my.pkg.android/my.pkg.android.Activity1} does not exist.

    It looks like I can't have the two apps at the same time in the emulator. What do you think? What do I have to do to have the two apps available (at the same time) in the emulator? Thank you for helping. Best regards.
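
    A hedged guess rather than a certain diagnosis: adb install replaces an installed app whenever the new .apk declares the same package attribute in its AndroidManifest.xml, regardless of the .apk file name or the activity names passed to am start. If ap2 started life as a copy of ap1, both manifests may still carry the same package, which would produce exactly this "second install replaces the first" behaviour. Worth checking that the two manifests differ:

        <!-- ap1/AndroidManifest.xml -->
        <manifest xmlns:android="http://schemas.android.com/apk/res/android"
                  package="my.pkg.android" ... >

        <!-- ap2/AndroidManifest.xml: must declare a different package -->
        <manifest xmlns:android="http://schemas.android.com/apk/res/android"
                  package="my.pkg2.android" ... >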


  • C# string interning

    - by CodingThunder
    I am trying to understand string interning and why it doesn't seem to work in my example. The point of the example is to show that Example 1 should use less memory (a lot less), as it should only have 10 strings in memory. However, in the code below both examples use roughly the same amount of memory (virtual size and working set). Please advise why Example 1 isn't using a lot less memory. Thanks. Example 1:

        IList<string> list = new List<string>(10000);
        for (int i = 0; i < 10000; i++) {
            for (int k = 0; k < 10; k++) {
                list.Add(string.Intern(k.ToString()));
            }
        }
        Console.WriteLine("intern Done");
        Console.ReadLine();

    Example 2:

        IList<string> list = new List<string>(10000);
        for (int i = 0; i < 10000; i++) {
            for (int k = 0; k < 10; k++) {
                list.Add(k.ToString());
            }
        }
        Console.WriteLine("intern Done");
        Console.ReadLine();
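
    A note on measurement, with a sketch: both versions hold 100,000 list slots (references), which cost the same either way; interning only saves the duplicate string objects, and 100,000 one-character strings amount to just a few megabytes, which is easy to lose inside coarse process-level numbers like virtual size and working set. GC.GetTotalMemory gives a much more direct comparison:

        // Minimal measurement sketch. BuildList() is a hypothetical helper
        // containing one of the two example bodies above.
        long before = GC.GetTotalMemory(true);   // true = force a full collection first
        IList<string> list = BuildList();
        long after = GC.GetTotalMemory(true);
        Console.WriteLine("managed bytes held by the list: {0}", after - before);
        GC.KeepAlive(list);                      // keep the list reachable while measuring

    Run against both examples, this should show the interned version holding noticeably fewer managed bytes, even though the process-level counters barely move.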


  • DotNetNuke and Subversion guidelines

    - by David Stratton
    I've Googled, Binged, and searched the related questions here at StackOverflow, and I'm not finding what I'm looking for. I've also searched the documentation on DNN. What I'm looking for is any guidance (tutorials, blogs, step-by-step instructions for setting up a repository, etc.) from people who are experienced in using DotNetNuke with SVN.

    We use SVN for all our source control, and have no problem with standard applications, because we pretty much built the repository and directory structure to work with our processes. This means that when we do web sites in Visual Studio, we do file-based web sites rather than setting them up in the local IIS. It just makes things easier for us. However, with DNN it appears that even if you get the source code, it is expecting to be set up in the local IIS, which means additional headaches for us. For example, we are moving all of our source code off our local C drives and onto a shared drive on a server. This is to enable backups in addition to our normal source control. (This was a management decision.) So that means that we need to change the virtual web app when we make the move.

    Has anyone come up with a good way to work around this? Can DNN be set up so that the developer web server in Visual Studio can be used, so that we can treat it just like any normal web app? Am I missing something obvious?

    Edit - added: I'm willing to accept answers like "We tried it and never got it to work" and "It can't be done" as answers. I'm always open to hearing "It can't be done the way you want. You need to change your procedures to match how it works" if necessary. I guess if you've got experience trying this and just couldn't get it to work, I can learn from your experience that way as well, but some detail would be good.


  • URL equals and checking Internet access

    - by James P.
    On http://java.sun.com/j2se/1.5.0/docs/api/java/net/URL.html it states that:

        Compares this URL for equality with another object. If the given object is not a URL then this method immediately returns false. Two URL objects are equal if they have the same protocol, reference equivalent hosts, have the same port number on the host, and the same file and fragment of the file. Two hosts are considered equivalent if both host names can be resolved into the same IP addresses; else if either host name can't be resolved, the host names must be equal without regard to case; or both host names equal to null. Since hosts comparison requires name resolution, this operation is a blocking operation. Note: The defined behavior for equals is known to be inconsistent with virtual hosting in HTTP.

    According to this, equals will only work if name resolution is possible. Since I can't be sure that a computer has internet access at a given time, should I just use Strings to store addresses instead? Also, how do I go about testing if access is available when requested?
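
    For what it's worth, the usual alternative, assuming it fits your use case: java.net.URI compares purely on the parsed components (scheme and host compared without regard to case) and never touches DNS, so equality is cheap, deterministic, and offline-safe. A small sketch:

        import java.net.URI;

        public class UriEquality {
            public static void main(String[] args) {
                URI a = URI.create("http://Example.com/index.html");
                URI b = URI.create("http://example.com/index.html");
                // true: scheme and host are compared case-insensitively,
                // and no name resolution (blocking I/O) is performed.
                System.out.println(a.equals(b));

                URI c = URI.create("http://example.org/index.html");
                System.out.println(a.equals(c));  // false: different host string
            }
        }

    Checking connectivity then becomes a separate concern, typically done by attempting a connection with a short timeout when it is actually needed, rather than anything derived from equals.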


  • How to view shell commands used by eclipse "run configurations"

    - by gmale
    Given a "run configuration" in Eclipse, I want to print out the associated shell command that would be used to run it. For example: Right now, in Eclipse, if I click "play" it will run: mvn assembly:directory -Dmaven.test.skip=true I don't see that command, I just know that's what the IDE must run, at some point. However, some of the other run configurations are far more complex with long classpaths and virtual machine options and, frankly, sometimes I have no idea what the equivalent shell command would be (particularly when it comes to Flex). There must be some way to access the shell command that would be associated with a "Run Configuration" in Eclipse/Flex Builder. This information must be available, which leads me to believe someone has written a plugin to display it. Or maybe there's already an option built into Eclipse for accessing this. So is there a way to, essentially, convert an Eclipse run configuration into a shell command? (for context only: I'm asking because I'm writing a bash script that automates everything I do, during development--from populating the Database all the way to opening Firefox and clearing the cache before running the web app. So every command I run from the IDE needs to exist in the script. Some are tricky to figure out.)


  • How to detect changing directory size in Perl

    - by materiamage
    Hello, I am trying to find a way of monitoring directories in Perl, in particular the size of a directory, and upon detecting a change in directory size, performing a particular action. The issue I have is with large files that require a noticeable amount of time to copy into this directory, i.e. > 100MB. What happens (in Windows, not Unix) is that the system reserves enough disk space for the entire file, even though the copy is still in progress. This causes problems for me, because my script will try to perform an action on a file that has not finished copying over. I can easily detect directory size changes in Unix via 'du', but 'du' in Windows does not behave the same way. Are there any accurate methods of detecting directory size changes in Perl?

    Edit: Some points to clarify:

    - My Perl script is only monitoring a particular directory, and upon detecting a new file or a new directory, performs an action on this new file or directory. It is not copying any files; users on the network will be copying files into the directory I am monitoring.
    - The problem occurs when a new file or directory appears (copied, not moved) that is significantly large (> 100MB, but usually a couple GB) and my program fires before this copy completes.
    - In Unix I can easily 'du' to see that the file/directory in question is growing in size, and take the appropriate action.
    - In Windows the size is static, so I cannot detect this change.
    - opendir/readdir/closedir is not feasible, as some of the directories that appear may contain thousands of files, and I want to avoid the overhead of scanning them all.

    Ideally I would like my program to be triggered on change, but I am not sure how to do this. As of right now it busy-waits until it detects a change. The change in file/directory size is not in my control.
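
    One Windows-specific probe, offered as a sketch under an assumption worth verifying in your environment: the process copying a file typically holds the destination open without write sharing until the copy finishes, so an attempt to open the file for writing fails with a sharing violation while the copy is still running. That gives a per-file "safe to process yet?" test without watching sizes at all:

        # Sketch: returns true while another process still holds $path open
        # for exclusive writing (as Windows copy operations typically do).
        # A genuinely read-only file would also fail this open, so treat the
        # result as a heuristic, not a guarantee.
        sub copy_in_progress {
            my ($path) = @_;
            open(my $fh, '>>', $path) or return 1;  # sharing violation => copy still running
            close($fh);
            return 0;
        }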


  • IIS URL Rewrite rule - Default document for subdirectories

    - by Antonio Bakula
    I would like to create a URL rewrite rule that will set the default document for my virtual folders, e.g. something like this:

        www.domain.com/en/ -> www.domain.com/en/index.aspx
        www.domain.com/hr/ -> www.domain.com/hr/index.aspx
        www.domain.com/de/ -> www.domain.com/de/index.aspx

    The directories en, hr, and de don't really exist on the web server; they are just markers for the language used on the site, consumed by a home-grown HTTP module that rewrites the path with query params. A quick solution was to define a rule for every single language, something like this:

        <rewrite>
          <rewriteMaps>
            <rewriteMap name="Langs">
              <add key="/en" value="/en/index.aspx" />
              <add key="/hr" value="/hr/index.aspx" />
              <add key="/de" value="/de/index.aspx" />
            </rewriteMap>
          </rewriteMaps>
          <rules>
          ...

    But I would really like a solution that would not require changes in web.config, adding a rewrite rule for every language used on a particular site. Thanks!
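
    For what it's worth, a single pattern-based rule avoids per-language entries entirely, assuming every marker is a two-letter code (widen the regex if that assumption doesn't hold). A sketch in the URL Rewrite module's syntax:

        <rewrite>
          <rules>
            <rule name="LanguageDefaultDocument" stopProcessing="true">
              <!-- matches /en/, /hr/, /de/, ... with or without the trailing slash -->
              <match url="^([a-z]{2})/?$" />
              <action type="Rewrite" url="{R:1}/index.aspx" />
            </rule>
          </rules>
        </rewrite>

    New languages then need no configuration changes at all; the captured folder name is reused as-is in the rewritten path.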


  • Why does my JavaScript file sometimes get compressed and sometimes not? (IIS gzip problem)

    - by Kevin Yang
    I enabled gzip for JavaScript files in my IIS settings; here's the corresponding config section:

        <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files">
          <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="10" dynamicCompressionLevel="8" />
          <dynamicTypes>
            <add mimeType="text/*" enabled="true" />
            <add mimeType="message/*" enabled="true" />
            <add mimeType="application/soap+msbin1" enabled="true" />
            <add mimeType="*/*" enabled="false" />
          </dynamicTypes>
          <staticTypes>
            <add mimeType="text/*" enabled="true" />
            <add mimeType="message/*" enabled="true" />
            <add mimeType="application/javascript" enabled="true" />
            <add mimeType="application/x-javascript" enabled="true" />
            <add mimeType="*/*" enabled="false" />
          </staticTypes>
        </httpCompression>

    Currently, when I download my JS file, the server sometimes returns the gzipped one and sometimes not. I don't know why, or how to debug it. If a file has already been gzipped, it should be cached on local disk, and the next time someone visits that file IIS should return the cached gzip file directly without compressing it again. Is that right?
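
    This on-and-off behaviour is usually IIS's frequent-hit gate rather than the cache misbehaving: static compression only kicks in once a URL has been requested frequentHitThreshold times within frequentHitTimePeriod, so the first request after the window resets comes back uncompressed. A sketch of the setting to experiment with (note that serverRuntime may be locked to applicationHost.config depending on your setup):

        <system.webServer>
          <!-- compress on the very first hit instead of waiting for repeats -->
          <serverRuntime frequentHitThreshold="1" frequentHitTimePeriod="00:00:10" />
        </system.webServer>

    Once a response has been compressed, it is indeed cached in the httpCompression directory and served from there on later hits, as you describe.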


  • How do I build a class that shares a table with multiple columns in MVC3?

    - by Barrett Kuethen
    I have a Job table:

        public class Job
        {
            public int JobId { get; set; }
            public int SalesManagerId { get; set; }
            public int SalesRepId { get; set; }
        }

    and a Person table:

        public class Person
        {
            public int PersonId { get; set; }
            public string FirstName { get; set; }
            public string LastName { get; set; }
        }

    My question is, how do I link the SalesManagerId to the Person (or PersonId), as well as the SalesRepId to the Person (PersonId)? The Sales Manager and Sales Rep are independent of each other. I just don't want to make 2 different lists to support the Sales Manager and Sales Rep roles. I'm new to MVC3, but it seems public virtual Person Person { get; set; } would be the way to go, but that doesn't work. Any help would be appreciated! Thanks!
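
    A sketch of the usual shape for this in Entity Framework code-first (assumed here, since MVC3 apps commonly pair with EF 4.1): Job gets two navigation properties to Person, and the fluent API pins each one to its FK column, because EF cannot infer two relationships to the same table on its own:

        public class Job
        {
            public int JobId { get; set; }
            public int SalesManagerId { get; set; }
            public int SalesRepId { get; set; }
            // Two independent navigations to the same Person table.
            public virtual Person SalesManager { get; set; }
            public virtual Person SalesRep { get; set; }
        }

        // In your DbContext:
        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.Entity<Job>()
                .HasRequired(j => j.SalesManager)
                .WithMany()
                .HasForeignKey(j => j.SalesManagerId)
                .WillCascadeOnDelete(false);   // avoid multiple cascade paths to Person

            modelBuilder.Entity<Job>()
                .HasRequired(j => j.SalesRep)
                .WithMany()
                .HasForeignKey(j => j.SalesRepId)
                .WillCascadeOnDelete(false);
        }

    Both roles then draw from the same Person set; no second list is needed.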


  • PHP Socket Server vs node.js: Web Chat

    - by Eliasdx
    I want to program an HTTP web chat using long-held HTTP requests (Comet), Ajax, and WebSockets (depending on the browser used). The user database is in MySQL. The chat is written in PHP, except maybe the chat stream itself, which could also be written in JavaScript (node.js). I don't want to start a PHP process per user, as there is no good way to send the chat messages between these PHP children. So I thought about writing my own socket server in either PHP or node.js, which should be able to handle more than 1000 connections (chat users). As a purely web developer (PHP), I'm not very familiar with sockets, as I usually let the web server take care of connections. The chat messages won't be saved on disk nor in MySQL, but in RAM as an array or object, for best speed.

    As far as I know, there is no way to handle multiple connections at the same time in a single PHP process (socket server); however, you can accept a great number of socket connections and process them successively in a loop (read and write; incoming message - write to all socket connections). The problem is that there will most likely be a lag with ~1000 users, and MySQL operations could slow the whole thing down, which would then affect all users.

    My question is: can node.js handle a socket server with better performance? Node.js is event-based, but I'm not sure if it can process multiple events at the same time (wouldn't that need multi-threading?) or if there is just an event queue. With an event queue it would be just like PHP: process user after user.

    I could also spawn a PHP process per chat room (far fewer users), but AFAIK there are single-threaded IRC servers that are capable of handling thousands of users (written in C++ or whatever), so maybe it's also possible in PHP. I would prefer PHP over node.js, because then the project would be PHP-only and not a mixture of programming languages. However, if Node can process connections simultaneously, I'd probably choose it.
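
    To make the event-queue point concrete, here is a minimal node.js broadcast-server sketch (illustrative only; a real chat would sit behind HTTP/WebSocket handling rather than raw TCP). Node does process one event at a time off a queue, but because every read and write is non-blocking, a single thread can keep thousands of sockets busy without per-user processes:

        var net = require('net');
        var clients = [];

        net.createServer(function (socket) {
            clients.push(socket);
            socket.on('data', function (chunk) {
                // Broadcast to everyone else; writes are queued, never blocking.
                clients.forEach(function (c) {
                    if (c !== socket) c.write(chunk);
                });
            });
            socket.on('close', function () {
                clients.splice(clients.indexOf(socket), 1);
            });
            socket.on('error', function () { /* drop noisy clients quietly */ });
        }).listen(8000);

    The thing that makes single-threaded IRC-style servers work is exactly this: the loop never waits on any one client, so "one event at a time" is not the bottleneck; blocking calls (like slow MySQL queries run inline) are.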


  • Web User Control and Event

    - by Mcoroklo
    I have a web user control, CreateQuestion.ascx, in ASP.NET. I want an event "QuestionAdded" to fire when a specific button is clicked. I don't want to send any data, I just want to know WHEN the button is fired. My implementation in CreateQuestion.ascx:

        public event EventHandler QuestionAdded;

        protected virtual void OnQuestionAdded(EventArgs e)
        {
            if (QuestionAdded != null)
                QuestionAdded(this, e);
        }

        // This is the button I want to know when is fired
        protected void SubmitButton_Click(object sender, EventArgs e)
        {
            //question.Save();
            this.OnQuestionAdded(new EventArgs());
        }

    On the page AnswerQuestion.aspx, I use this:

        private void SetupControls(int myId)
        {
            CreateQuestionControl.QuestionAdded += new EventHandler(QuestionAdded);
        }

        private void QuestionAdded(object sender, EventArgs e)
        {
            Response.Write("HEJ KARL?");
        }

    My problem: no matter what, the event is never fired. I know that both SetupControls() is being run and the code behind the button which should fire the event is run. When I debug, I can see the event QuestionAdded is always null. How do I make this work? Thanks a lot.
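
    A hedged reading of the symptom (the button's code runs but the event is null): event subscriptions do not survive a postback, so if SetupControls runs only on the first load (for example inside an if (!IsPostBack) branch), then on the postback that raises the event there are no subscribers left. Attaching the handler on every request, early in the lifecycle, is the usual fix; a sketch for AnswerQuestion.aspx:

        // Runs on every request, before the button's click event is raised,
        // so the subscription exists on the postback that fires QuestionAdded.
        protected void Page_Init(object sender, EventArgs e)
        {
            CreateQuestionControl.QuestionAdded += new EventHandler(QuestionAdded);
        }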


  • Alternative or successor to GDBM

    - by Anon Guy
    We have a GDBM key-value database as the backend to a load-balanced, web-facing application that is implemented in C++. The data served by the application has grown very large, so our admins have moved the GDBM files from "local" storage (on the webservers, or very close by) to a large, shared, remote, NFS-mounted filesystem. This has affected performance. Our performance tests (in a test environment) show page load times jumping from hundreds of milliseconds (for local disk) to several seconds (over NFS, local network), sometimes getting as high as 30 seconds. I believe a large part of the problem is that the application makes lots of random reads from the GDBM files, and that these are slow over NFS; this will be even worse in production (where the front end and back end have even more network hardware between them) and as our database gets even bigger.

    While this is not a critical application, I would like to improve performance, and I have some resources available, including application-developer time and Unix admins. My main constraint is time: I only have the resources for a few weeks. As I see it, my options are:

    - Improve NFS performance by tuning parameters. My instinct is we won't get much out of this, but I have been wrong before, and I don't really know very much about NFS tuning.
    - Move to a different key-value database, such as memcachedb or Tokyo Cabinet.
    - Replace NFS with some other protocol (iSCSI has been mentioned, but I am not familiar with it).

    How should I approach this problem?


  • Why am I getting a permission denied error on my public folder?

    - by Robin Fisher
    Hi all, this one has got me stumped. I'm deploying a Rails 3 app to Slicehost running Apache 2 and Passenger. My server is running Ruby 1.9.1 using RVM. I am receiving a permission denied error on the "public" folder in my app. My virtual host is set up as follows:

        <VirtualHost *:80>
          ServerName sharerplane.com
          ServerAlias www.sharerplane.com
          ServerAlias *.sharerplane.com
          DocumentRoot /home/robinjfisher/public_html/sharerplane.com/current/public/
          <Directory "/home/robinjfisher/public_html/sharerplane.com/public/">
            AllowOverride all
            Options -MultiViews
            Order allow,deny
            Allow from all
          </Directory>
          PassengerDefaultUser robinjfisher
        </VirtualHost>

    I've tried the following things: trailing slash on public; no trailing slash on public; PassengerUserSwitching on and off; PassengerDefaultUser set and not set; with and without the <Directory> block. The public folder is owned by robinjfisher:www-data and Passenger is running as robinjfisher, so I can't see why there are permission issues. Does anybody have any thoughts?

    Thanks, Robin

    PS. I have disabled the site for the time being to avoid indexing, so what is there currently is not the site in question.
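
    One detail in the quoted config stands out as the likely culprit, though it is offered as a guess: DocumentRoot points inside current/ (the Capistrano-style symlink) while the <Directory> block omits it, so the Allow rules never apply to the directory Apache actually serves and the request falls through to the default deny. The corrected block would be:

        <Directory "/home/robinjfisher/public_html/sharerplane.com/current/public/">
          AllowOverride all
          Options -MultiViews
          Order allow,deny
          Allow from all
        </Directory>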


  • Unhandled Exception error message

    - by Joshua Green
    Does anyone know why including a term such as:

        t = PL_new_term_ref();

    would cause an Unhandled Exception error message (0xC0000005: Access violation reading location 0x0000000c, Visual Studio 2008)? I have a header file:

        class UserTaskProlog : public ArAction {
        public:
            UserTaskProlog( const char* name = " sth " );
            ~UserTaskProlog( );
            AREXPORT virtual ArActionDesired *fire( ArActionDesired currentDesired );
        private:
            term_t t;
        };

    and a cpp file:

        UserTaskProlog::UserTaskProlog( const char* name ) : ArAction( name, " sth " )
        {
            char** argv;
            argv[ 0 ] = "libpl.dll";
            PL_initialise( 1, argv );
            PlCall( "consult( 'myProg.pl' )" );
        }

        UserTaskProlog::~UserTaskProlog( )
        {
        }

        ArActionDesired *UserTaskProlog::fire( ArActionDesired currentDesired )
        {
            cout << " something " << endl;
            t = PL_new_term_ref( );
        }

    Without t = PL_new_term_ref() everything works fine, but when I start adding my Prolog code (declarations first, such as t = PL_new_term_ref), I get this access violation error message. I'd appreciate any help. Thanks.
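
    Independent of where the crash surfaces, the constructor as quoted contains undefined behaviour that is worth fixing first: char** argv is declared without any storage behind it, so argv[0] = "libpl.dll" writes through an uninitialised pointer, and PL_initialise may then be handed garbage. A sketch of a safer initialisation, checking the return value (SWI-Prolog documents it as false on failure):

        // A real argv array with storage, instead of a wild char** pointer.
        char* argv[] = { (char*)"libpl.dll" };
        if ( !PL_initialise( 1, argv ) )
        {
            // Initialisation failed; any later PL_new_term_ref() would be
            // poking at a Prolog engine that was never set up.
            PL_halt( 1 );
        }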


  • handling pointer to member functions within a hierarchy in C++

    - by anatoli
    Hi, I'm trying to code the following situation: I have a base class providing a framework for handling events. I'm trying to use an array of pointer-to-member-functions for that. It goes as follows:

        class EH { // EventHandler
            virtual void something(); // just to make sure we get RTTI
        public:
            typedef void (EH::*func_t)();
        protected:
            func_t funcs_d[10];
        protected:
            void register_handler(int event_num, func_t f) {
                funcs_d[event_num] = f;
            }
        public:
            void handle_event(int event_num) {
                (this->*(funcs_d[event_num]))();
            }
        };

    Then the users are supposed to derive other classes from this one and provide handlers:

        class DEH : public EH {
        public:
            typedef void (DEH::*func_t)();
            void handle_event_5();
            DEH() {
                func_t f5 = &DEH::handle_event_5;
                register_handler(5, f5); // doesn't compile
                ........
            }
        };

    This code wouldn't compile, since DEH::func_t cannot be converted to EH::func_t. It makes perfect sense to me. In my case the conversion is safe, since the object under this really is a DEH. So I'd like to have something like this:

        void EH::DEH_handle_event_5_wrapper() {
            DEH *p = dynamic_cast<DEH *>(this);
            assert(p != NULL);
            p->handle_event_5();
        }

    and then, instead of

        func_t f5 = &DEH::handle_event_5;
        register_handler(5, f5); // doesn't compile

    in DEH::DEH(), put

        register_handler(5, &EH::DEH_handle_event_5_wrapper);

    So, finally, the question (took me long enough...): Is there a way to create those wrappers (like EH::DEH_handle_event_5_wrapper) automatically? Or to do something similar? What other solutions to this situation are out there? Thanks.
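
    For what it's worth, C++ allows exactly this conversion when you ask for it explicitly: a pointer-to-member of a derived class converts to a pointer-to-member of its base via static_cast (member-pointer conversions run in the opposite direction to object-pointer ones), and the call is well-defined as long as the receiving object really is a DEH. That removes the need for generated wrappers; a sketch:

        // Inside DEH's constructor. The cast is legal C++:
        // void (DEH::*)() -> void (EH::*)() via static_cast. Invoking the
        // result through EH::handle_event is only valid when *this actually
        // is a DEH, which holds here because registration happens in DEH.
        DEH() {
            register_handler(5, static_cast<EH::func_t>(&DEH::handle_event_5));
        }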


  • Serialized NHibernate Configuration objects - detect out of date or rebuild on demand?

    - by fostandy
    I've been using serialized NHibernate Configuration objects (also discussed here and here) to speed up my application startup from about 8s to 1s. I also use fluent-nhibernate, so the path is more like: ClassMap class definitions in code -> fluent configuration -> XML -> NHibernate Configuration -> Configuration serialized to disk.

    The problem with doing this is that one runs the risk of out-of-date mappings - if I change the mappings but forget to rebuild the serialized configuration, then I end up using the old mappings without realising it. This does not always result in an immediate and obvious error during testing, and several times the misbehaviour has been a real pain to detect and fix.

    Does anybody have any idea how I would be able to detect whether my classmaps have changed, so that I could either issue an immediate warning/error or rebuild the configuration on demand? At the moment I am comparing timestamps on my compiled assembly against the serialized configuration. This picks up mapping changes, but unfortunately it generates a massive false-positive rate, as ANY change to the code results in an out-of-date flag. I can't move the classmaps to another assembly, as they are tightly integrated into the business logic. This has been niggling me for a while, so I was wondering if anybody had any suggestions?


  • How to install previously-archived apps from xcode organizer to my iphone

    - by Ben Clayton
    Hi all. Xcode keeps an archive of all the versions of my apps that I've submitted to the App Store in the 'archived applications' section. I assumed that using this I could install an old version of an app to my device, in order to reproduce any problems my client may have had with that particular version. However, when I try to do this I get an error: 'this executable was signed with invalid entitlements, the entitlements specified in your application's code signing entitlements do not match those specified in your provisioning profile'. The original app was signed using our App Store distribution certificate, and I use the Organizer interface to re-sign it using our developer profile:

    - select the archived app
    - select the version I want to test
    - click 'share'
    - select 'iphone developer' next to identity
    - save to disk (saves the ipa file)
    - copy the ipa to the device using the little + button you see next to 'applications' on the screen you get when you select the connected device

    Then I get the error, and the app isn't installed. Is there something obvious I'm doing wrong here? Or is there a different process to re-install an archived app to my device? Thanks.


  • To what extent should code try to explain fatal exceptions?

    - by Andrzej Doyle
    I suspect that all non-trivial software is likely to experience situations where it hits an external problem it cannot work around and thus needs to fail. This might be due to bad configuration, an external server being down, a full disk, etc. In these situations, especially if the software is running in non-interactive mode, I expect that all one can really do is log an error and wait for the admin to read the logs and fix the problem. If someone happens to interact with the software in the meantime, e.g. a request comes in to a server that failed to initialize properly, then perhaps an appropriate hint can be given to check the logs, and maybe the error can even be echoed (depending on whether you can tell if they're a technical person as opposed to a business user). For the moment, though, let's not think too hard about this part.

    My question is: to what extent should the software be responsible for trying to explain the meaning of a fatal error? In general, how much competence/knowledge are you allowed to presume on the part of administrators of the software, and how much troubleshooting information and potential resolution steps should you include when logging fatal errors? Of course, if there's something that's unique to the runtime context, this should definitely be logged. But let's assume your software needs to talk to Active Directory via LDAP and gets back an error "[LDAP: error code 49 - 80090308: LdapErr: DSID-0C090334, comment: AcceptSecurityContext error, data 525, vece]". Is it reasonable to assume that the maintainers will be able to Google the error code and work out what it means, or should the software try to parse the error code and log that this is caused by an incorrect user DN in the LDAP config? I don't know if there is a definitive best-practices answer for this, so I'm keen to hear a variety of views.


  • Hide a single content block from search engines?

    - by jonas
    A header is automatically added on top of each content URL, but it's not relevant for search, and it's messing up all the results by being the first line of every page (in the code it's the last line, but visually it's the first, which Google is able to notice).

    Solution 1: You could put the header (content to exclude from Google searches) in an iframe with a static URL, domain.com/header.html, and a <meta name="robots" content="noindex" />. Are there trade-offs to this solution?

    Solution 2: You could deliver it conditionally via Apache mod_rewrite, PHP, or JavaScript.

    - Trade-off(?): Google does not like it. Will Google ever try pages with a standard user's user agent and compare?
    - Trade-off: The hidden content will be missing in the Google cache version as well.

    Example: add-header.php:

        <?php
        $path = $_GET['path'];
        echo file_get_contents($_SERVER["DOCUMENT_ROOT"].$path);
        ?>

    Apache virtual host config:

        RewriteCond %{HTTP_USER_AGENT} !.*spider.* [NC]
        RewriteCond %{HTTP_USER_AGENT} !Yahoo.* [NC]
        RewriteCond %{HTTP_USER_AGENT} !Bing.* [NC]
        RewriteCond %{HTTP_USER_AGENT} !Yandex.* [NC]
        RewriteCond %{HTTP_USER_AGENT} !Baidu.* [NC]
        RewriteCond %{HTTP_USER_AGENT} !.*bot.* [NC]
        RewriteCond %{SCRIPT_FILENAME} \.htm$ [NC,OR]
        RewriteCond %{SCRIPT_FILENAME} \.html$ [NC,OR]
        RewriteCond %{SCRIPT_FILENAME} \.php$ [NC]
        RewriteRule ^(.*)$ /var/www/add-header.php?path=$1 [L]


  • I have a problem with sharing a folder programmatically using C#

    - by moon
    Here is my code. It shares the folder, but it does not work correctly: when I want to access the share, it shows access denied. Help required.

        private static void ShareFolder(string FolderPath, string ShareName, string Description)
        {
            try
            {
                // Create a ManagementClass object
                ManagementClass managementClass = new ManagementClass("Win32_Share");
                // Create ManagementBaseObjects for in and out parameters
                ManagementBaseObject inParams = managementClass.GetMethodParameters("Create");
                ManagementBaseObject outParams;
                // Set the input parameters
                inParams["Description"] = Description;
                inParams["Name"] = ShareName;
                inParams["Path"] = FolderPath;
                inParams["Type"] = 0x0; // Disk Drive
                // Other Types:
                //PRINT_QUEUE = 0x1;
                //DEVICE = 0x2;
                //IPC = 0x3;
                //DISK_DRIVE_ADMIN = 0x80000000;
                //PRINT_QUEUE_ADMIN = 0x80000001;
                //DEVICE_ADMIN = 0x80000002;
                //IPC_ADMIN = 0x8000003;
                //inParams["MaximumAllowed"] = int maxConnectionsNum;
                // Invoke the method on the ManagementClass object
                outParams = managementClass.InvokeMethod("Create", inParams, null);
                // Check to see if the method invocation was successful
                if ((uint)(outParams.Properties["ReturnValue"].Value) != 0)
                {
                    throw new Exception("Unable to share directory. Because Directory is already shared or directory not exist");
                }
            }
            catch (Exception ex)
            {
                MessageBox.Show(ex.Message, "error!");
            }
        }
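
    Two hedged observations that may explain the access denied. First, when no "Access" security descriptor is passed, Win32_Share.Create applies a restrictive default share ACL (typically read-only for Everyone), and the folder's NTFS permissions still apply on top, so writes through the share can be denied even though the share was created fine; granting the intended users rights on the folder itself, or supplying a Win32_SecurityDescriptor in the "Access" parameter, is the usual cure. Second, the error handling above collapses every nonzero result into one message, while the documented return codes say more:

        // Sketch: decode Win32_Share.Create's documented return values
        // instead of assuming "already shared or does not exist".
        uint rc = (uint)(outParams.Properties["ReturnValue"].Value);
        switch (rc)
        {
            case 0:  break;                                            // success
            case 2:  throw new Exception("Access denied (creating shares needs admin rights)");
            case 9:  throw new Exception("Invalid share name");
            case 22: throw new Exception("Duplicate share");
            case 25: throw new Exception("Net name not found");
            default: throw new Exception("Win32_Share.Create failed, code " + rc);
        }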


  • What happens to existing workspaces after upgrading to TFS 2010

    - by e-mre
    Hi, I was looking for some insight into what happens to existing workspaces, and to files that are already checked out by people, after an upgrade to TFS 2010. Surprisingly enough, I cannot find any satisfactory information on this. (I am talking about upgrading on new hardware, by the way: fresh TFS instance, upgraded databases.) I've checked the TFS installation guide and searched the web, but all I could find are upgrade scenarios for the server side. Nobody even mentions what happens to source control clients.

    I've created a virtual machine to test the upgrade process. The upgrade was successful, and all my files and workspaces exist on the new server too. The problem is: the new TFS installation has a new instance ID. When I redirected the clients to the new server, the client seemed unable to match files and file states in the workspace with the ones on the new server. This makes me wonder if it will be possible to keep working after the production upgrade. As I mentioned above, I cannot find anything on this; it would be great if anyone could point me to some paper or blog post about it. Thanks in advance...


  • Multi-threaded library calls in ASP.NET page request.

    - by ProfK
    I have an ASP.NET app - very basic, but right now too much code to post, if we're lucky and I don't have to. We have a class called ReportGenerator. On a button click, the method GenerateReports is called. It makes an async call to InternalGenerateReports using ThreadPool.QueueUserWorkItem and returns, ending the ASP.NET response. It doesn't provide any completion callback or anything. InternalGenerateReports runs five reports at a time, one report per thread, also using QueueUserWorkItem: in a loop, it 'creates' five threads and waits until the calls on all of them complete. Each thread uses an ASP.NET ReportViewer control to render a report to HTML. That is, for 200 reports, InternalGenerateReports should create 5 threads 40 times. As threads complete, report data is queued, and when all five have completed, report data is flushed to disk.

    My biggest problems are that after running just one report the aspnet process is 'hung', and also that at around 200 reports the app just hangs. I just simplified this code to run in a single thread, and that works fine. Before we get into details like my code, is there anything obvious in the above scenario that might be wrong?
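
    For reference, a minimal sketch of the batch-of-five wait pattern as described (RenderReport and reports are placeholders, not from the app): each worker signals an event, and the loop blocks until the whole batch has signalled before queuing the next five:

        // One batch of the outer loop: queue five reports, wait for all five.
        // RenderReport and reports[] are illustrative placeholders.
        var done = new ManualResetEvent[5];
        for (int i = 0; i < 5; i++)
        {
            done[i] = new ManualResetEvent(false);
            int idx = i;                      // capture a per-iteration copy
            ThreadPool.QueueUserWorkItem(_ =>
            {
                try { RenderReport(reports[idx]); }
                finally { done[idx].Set(); }  // signal even if rendering throws
            });
        }
        WaitHandle.WaitAll(done);             // blocks until the whole batch signals

    If any worker can finish without signalling, WaitAll blocks forever, which is consistent with the hangs described; the try/finally is the load-bearing part of the sketch.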


  • Do you have health checks in your web app or web site?

    - by Pekka
    I have built PHP-based "health check" scripts for several projects, but they were always custom-made for the occasion and not written for abstraction as an independent product. I would like to know whether such a solution exists. What I mean by "health check" is a protected web page that functions much like a suite of unit tests, but on a more operational level, showing red/yellow/green statuses for things like:

    - Are the cache directories writable?
    - Is the PHP version correct, and are required extensions installed?
    - Is the configuration file protected from writing?
    - Is the database server reachable?
    - Do the key tables exist in the database?
    - Is there enough disk space available?
    - Is the site's front page reachable, and does it render fully (= no PHP errors)?
    - Do the project's libraries' MD5 checksums match the original ones?

    Do you do this - or parts of it - in your applications and web sites? Are there any standardized tools for this that bring along all the functionality to perform the tests (ideally as plugins) and just need to be configured accordingly? Is there a way to set this up using one of the unit testing frameworks available for PHP (preferably PHPUnit)? If so, do you know any resources or tutorials outlining how?
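
    For a sense of how small the core of such a page is, here is a minimal PHP sketch; the paths, thresholds, and credentials are invented placeholders. Each check is just a name mapped to a boolean, rendered green or red:

        <?php
        // Hypothetical paths/thresholds/credentials - adjust per project.
        $checks = array(
            'cache directory writable' => is_writable('/var/www/app/cache'),
            'PHP version >= 5.2'       => version_compare(PHP_VERSION, '5.2.0', '>='),
            'config not writable'      => !is_writable('/var/www/app/config.php'),
            'enough disk space'        => disk_free_space('/') > 500 * 1024 * 1024,
            'database reachable'       => (bool) @mysql_connect('localhost', 'user', 'pass'),
        );
        foreach ($checks as $name => $ok) {
            $color = $ok ? 'green' : 'red';
            echo "<div style=\"color:$color\">" . htmlspecialchars($name)
               . ': ' . ($ok ? 'OK' : 'FAIL') . "</div>\n";
        }
        ?>

    The interesting part of a reusable product would be exactly what the question asks for: packaging checks like these as configurable plugins behind authentication, rather than the checks themselves.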

