Search Results

Search found 2353 results on 95 pages for 'browse'.


  • ASP.NET Authentication Cookie timeout and IIS 7 setting

    - by David Laplante
    Hello, I have an ASP.NET website for which I've set the authentication timeout to 60 days so that my users don't have to log in each time they come back if they checked the "remember me" option. It's the basic ASP.NET login mechanism. It works fine on my development server as well as on the Visual Studio built-in web server: I can close the browser, wait around 30-40 minutes, browse back to the site and be automatically logged in. However, I've now moved the site to a hosting provider, and it seems that whatever I do to my Web.config file, the cookie expires after around 30 minutes (it's hard to tell the exact amount of time). I asked the provider's support and they basically told me: "The Web.config file is to configure your website. Please do not change it if you don't know what you are doing." A frustrating answer indeed. To be sure, I checked everywhere on the net for exceptions or fine print in basic ASP.NET authentication, but found none. I have access to IIS remote management for my site (IIS 7) but don't really know where to look. Can there be something in the IIS settings that is overriding my Web.config authentication settings? What should I do? Thanks for your help!
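    One way to narrow this down (not something from the original post) is to decrypt the forms ticket on the server and log its lifetime. If the ticket still shows the 60-day expiry but users get logged out, the host is probably rejecting a still-valid ticket (for example because an auto-generated machineKey changes when the app pool recycles) rather than the cookie timing out. A minimal diagnostic sketch for Global.asax.cs, assuming using System.Web; and using System.Web.Security;:

      protected void Application_AuthenticateRequest(object sender, EventArgs e)
      {
          // Diagnostic only: dump the forms ticket's lifetime for each request.
          HttpCookie cookie = Request.Cookies[FormsAuthentication.FormsCookieName];
          if (cookie == null) return;
          try
          {
              FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt(cookie.Value);
              System.Diagnostics.Trace.WriteLine(string.Format(
                  "Forms ticket issued {0}, expires {1}, persistent: {2}",
                  ticket.IssueDate, ticket.Expiration, ticket.IsPersistent));
          }
          catch (Exception)
          {
              // A cookie the browser still holds failing to decrypt usually means the
              // machineKey changed (e.g. an auto-generated key after an app pool recycle).
              System.Diagnostics.Trace.WriteLine("Forms ticket could not be decrypted.");
          }
      }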

  • Save As dialog box to save textbox content to a new file using ASP.NET

    - by user195114
    I want users to type their text into the given textbox and, on clicking the createNewFile button, have a Save As dialog box pop up so they can browse to a location and save the file where they want. I have tried something, but (1) the dialog box goes behind the application, and (2) when run, the dialog box opens three times. Here is what I have tried:

      protected void btnNewFile_Click(object sender, EventArgs e)
      {
          StreamWriter sw = null;
          try
          {
              SaveFileDialog sdlg = new SaveFileDialog();
              DialogResult result = sdlg.ShowDialog();
              sdlg.InitialDirectory = @"C:\";
              sdlg.AddExtension = true;
              sdlg.CheckPathExists = true;
              sdlg.CreatePrompt = false;
              sdlg.OverwritePrompt = true;
              sdlg.ValidateNames = true;
              sdlg.ShowHelp = true;
              sdlg.DefaultExt = "txt";
              // sdlg.ShowDialog = Form.ActiveForm;
              string file = sdlg.FileName.ToString();
              string data = txtNewFile.Text;
              if (sdlg.ShowDialog() == DialogResult.OK)
              {
                  sw.WriteLine(txtNewFile.Text);
                  sw.Close();
              }
              if (sdlg.ShowDialog() == DialogResult.Cancel)
              {
                  sw.Dispose();
              }
              //Save(file, data);
          }
          catch { }
          finally
          {
              if (sw != null)
              {
                  sw.Close();
              }
          }
      }

      private void Save(string file, string data)
      {
          StreamWriter writer = new StreamWriter(file);
          SaveFileDialog sdlg1 = new SaveFileDialog();
          try
          {
              if (sdlg1.ShowDialog() == DialogResult.OK)
              {
                  writer.Write(data);
                  writer.Close();
              }
              else
                  writer.Dispose();
          }
          catch (Exception xp)
          {
              MessageBox.Show(xp.Message);
          }
          finally
          {
              if (writer != null)
              {
                  writer.Close();
              }
          }
      }
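    For context (not part of the original post): SaveFileDialog is a Windows Forms dialog, so in an ASP.NET page it opens on the web server, not in the visitor's browser, which is why it only appears to work while debugging locally. The usual web approach is to write the textbox contents to the response with a Content-Disposition header and let the browser show its own Save As dialog. A minimal sketch, where the control and file names are assumptions:

      protected void btnNewFile_Click(object sender, EventArgs e)
      {
          // Send the textbox contents back as a downloadable text file;
          // the browser then shows its own Save As dialog.
          Response.Clear();
          Response.ContentType = "text/plain";
          Response.AddHeader("Content-Disposition", "attachment; filename=newfile.txt");
          Response.Write(txtNewFile.Text);
          Response.End();
      }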

  • Question about registering a COM server and adding a reference to it in a C# project

    - by smwikipedia
    I built a COM server in raw C++. Here is the procedure:
    (1) Write an IDL file to define the interface and library.
    (2) Use midl.exe to compile the IDL file into the necessary .h, .c and .tlb files.
    (3) Implement the COM server in C++ and build a .dll file.
    (4) Add the following registry entries:

      [HKEY_CLASSES_ROOT\RawComCarLib.ComCar.1\CurVer]
      @="RawComCarLib.ComCar.1"
      ;CLSID
      [HKEY_CLASSES_ROOT\CLSID\{6CC26343-167B-4CF2-9EDF-99368A62E91C}]
      @="RawComCarLib.ComCar.1"
      [HKEY_CLASSES_ROOT\CLSID\{6CC26343-167B-4CF2-9EDF-99368A62E91C}\InprocServer32]
      @="D:\com\Project01.dll"
      [HKEY_CLASSES_ROOT\CLSID\{6CC26343-167B-4CF2-9EDF-99368A62E91C}\ProgID]
      @="RawComCarLib.ComCar.1"
      [HKEY_CLASSES_ROOT\CLSID\{6CC26343-167B-4CF2-9EDF-99368A62E91C}\TypeLib]
      @="{E5C0EE8F-8806-4FE3-BC0E-3A56CFB38BEE}"
      ;TypeLib
      [HKEY_CLASSES_ROOT\TypeLib\{E5C0EE8F-8806-4FE3-BC0E-3A56CFB38BEE}]
      [HKEY_CLASSES_ROOT\TypeLib\{E5C0EE8F-8806-4FE3-BC0E-3A56CFB38BEE}\1.0]
      @="Car Server Type Lib"
      [HKEY_CLASSES_ROOT\TypeLib\{E5C0EE8F-8806-4FE3-BC0E-3A56CFB38BEE}\1.0\0]
      [HKEY_CLASSES_ROOT\TypeLib\{E5C0EE8F-8806-4FE3-BC0E-3A56CFB38BEE}\1.0\0\win32]
      @="D:\com\Project01.tlb"
      [HKEY_CLASSES_ROOT\TypeLib\{E5C0EE8F-8806-4FE3-BC0E-3A56CFB38BEE}\1.0\FLAGS]
      @="0"
      [HKEY_LOCAL_MACHINE\SOFTWARE\Classes\TypeLib\{E5C0EE8F-8806-4FE3-BC0E-3A56CFB38BEE}\1.0\0\win32]
      @="C:\Windows\System32\msdatsrc.tlb"

    (5) Try to add a reference to the COM server by clicking Add Reference in the C# project.
    (6) In the COM tab I can see my "Car Server Type Lib", so everything is fine up to this point.

    However, when I try to use the Object Browser to browse my COM library, Visual Studio says "the following components could not be browsed", and I notice that no new reference has been added to the References list of the C# project. I can use tlbimp.exe to generate an interop.CarCom.dll and then use the COM server through that interop DLL, but I want the interop assembly to be generated automatically when I simply add a reference to the COM component. Could someone tell me what's wrong? Many thanks.

  • Firefox: Can I use a relative path in the BASE tag?

    - by Aaron Digulla
    I have a little web project where I have many pages and an index/ToC file. The ToC file is at the root of my project in toc.html. The pages are spread over a couple of subdirectories and include the ToC with an iframe. The project doesn't need a web server, so I can create the HTML in a directory and browse it in my browser. The problem is that I'm running into XSS issues when JavaScript from toc.html wants to call a function in a page (a violation of the same-origin policy). So I added base tags in the headers with a relative URL pointing to the directory in which toc.html is located. This works in Konqueror, but in Firefox I have to use absolute paths or the ToC won't even display :( Here is an example:

      <?xml version='1.0' encoding='utf-8' ?>
      <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
          <base href="../" target="_top" />
          <title>Project 1</title>
        </head>
        <body>
          <iframe class="toc" frameborder="0" src="toc.html">
          </iframe>
        </body>
      </html>

    This file is in the subdirectory page. Firefox won't even load it, saying that it can't find page/toc.html. Is there a workaround? I would really like to avoid absolute paths in my export so I can keep it the same everywhere (locally and when I upload it to the web server later).

  • ASP.NET MVC View Engine Resolution Sequence

    - by intangible02
    I created a simple ASP.NET MVC 1.0 application. I have a ProductController with one action, Index. In the views, I created a corresponding Index.aspx under the Product subfolder. Then I referenced the Spark DLL and created Index.spark under the same Product view folder. The Application_Start looks like:

      protected void Application_Start()
      {
          RegisterRoutes(RouteTable.Routes);
          ViewEngines.Engines.Clear();
          ViewEngines.Engines.Add(new Spark.Web.Mvc.SparkViewFactory());
          ViewEngines.Engines.Add(new WebFormViewEngine());
      }

    My expectation is that since the Spark engine is registered before the default WebFormViewEngine, browsing to the Index action of the Product controller should use the Spark engine, and the WebFormViewEngine should be used for all other URLs. However, testing shows that the Index action of the Product controller also uses the WebFormViewEngine. If I comment out the registration of the WebFormViewEngine (the last line in the code), the Index action is rendered by the Spark engine and the other URLs generate an error (since the default engine is gone), which proves that all my Spark code is correct. Now my question is: how is the view engine resolved? Why doesn't the registration order take effect?

  • CFML error with Application.cfc page

    - by tibin mathew
    Hi, I have a problem with my CFML website. I have used the code below in the Application.cfc file to connect to the DSN, but whenever I put it on my server I get an error; I can't browse even a single test.cfm page. Is there any mistake in the code, a syntax error or something like that, or could it be a problem with the DSN?

      <cfset this.name = "0307de6.netsolhost.com">
      <cfset this.sessionmanagement = true>
      <cfset this.loginstorage="session">
      <cfset this.sessiontimeout = CreateTimeSpan(0,0,30,0)>
      <cfset this.applicationtimeout = CreateTimeSpan(2,0,0,0)>

      <cffunction name="onApplicationStart">
        <cfscript>
          application.DSN = "hirerodsn";
          application.dbUserName = "myusr";
          application.dbPassword = "myd69!";
        </cfscript>
      </cffunction>

      <cffunction name="onRequestStart">
        <cfscript>
          request.DSN = "hirerodsn";
          request.dbUserName = "myusr";
          request.dbPassword = "myd69!";
        </cfscript>
      </cffunction>

    Please, can anyone help me?

  • Relational database data explorer / visualization?

    - by Ian Boyd
    Is there a tool that lets one browse relational data as a graph of connected nodes?

    For example, I'm faced with trying to cleanse some anomalous data, and I can start with two offending rows. In this particular example, the TransactionID should, by business rules, be unique to the table, but I find a transaction that violates that rule:

      SELECT * FROM LCTTrans
      WHERE TransactionID = 1075048

      LCTID      TransactionID
      =========  =============
      4358       1075048
      4359       1075048

      2 row(s) affected

    But really what I want is to begin to hunt down all the related data, to try to see which is right. So this hypothetical software would start by showing me these two rows. Next, I want to see the transaction that is linked into this table. Now that transaction points to an MAL, so show me that. Now let's add those two LCTs that the transaction is "on": a transaction can be on only one LCT, yet this one is pointing to two. Okay computer, both of those LCTs point to an MAL and to the transaction that created them, show me those. Those last two transactions also point at an MAL, and they themselves point to an LCT, show me those. Okay, now are there any entries in LCTTrans that point to LCTs 4358 or 4359?... And so on, and so on.

    I did all this manually, running single SELECTs, copying and pasting uniqueidentifier keys and converting them into friendly ID numbers so I could easily see the relationships. Is there software that can do this?

  • Why would some $_POST variables go missing for a form with PHP?

    - by Chad Johnson
    Sometimes, multiple times a day in fact, users of my web application submit a certain form which has about a dozen form fields, half of which are hidden fields, and half of the $_POST data is simply not present in the processing script. Note that the fields that are not present are at the very bottom of the form. I know this because it raises a fatal error, and an email is dispatched to me which includes the post data. And of course, neither I nor any of the developers on my team can reproduce the problem. Flash is involved in the process, as I'm using a library called Uploadify to display a progress meter. Here is the flow... does anyone have ANY ideas at all why some of the post data would be getting wiped out?

    1. User visits the edit screen for a page in the CMS I am using.
    2. The record id for the page is put into the form as a hidden value.
    3. User clicks the Uploadify browse button and selects a file (only single file selection is allowed).
    4. User clicks the Submit button for my form.
    5. jQuery intercepts the form submit action, triggers Uploadify to start uploading, and returns false for the submit action (manually cancelling the form submit event so that Uploadify can take over).
    6. Uploadify uploads to a custom process script.
    7. Uploadify finishes uploading and triggers the JavaScript completion callback.
    8. The JavaScript callback calls $('#myForm').submit() to submit the form.

    This happens on multiple browsers (Firefox 3.5, 3.6, Safari, Internet Explorer 7, 8) and multiple platforms (Mac OS 10.5, 10.6 and Windows XP, 7).

  • Ruby page loading very very slowly - how should I speed it up?

    - by Elliot
    Hey guys, I'm going to try to describe the code in my view without actually posting all the garbage. It has a standard shell (header, footer etc. in the layout), which is also where the sub-navigation lives; the sub-nav is built from a loop (to find the number of options), and on this page we have 6 sub-nav links. Then in the index view we have a third-level nav with 3 links that use JavaScript to show/hide divs on the page. This means each of those original 6 options has its own third-level nav, each with its own 3 div pages. These three pages/divs contain the input form for creating a record in Rails, and the other two pages show the records in different assortments. ALL of this code lives on one page (aside from the shell). The original sub-nav uses a JavaScript tab solution to browse through all of it... (this means it's about 6 divs, each containing 4 divs of function, so about 24 heavy divs). Loading it seems to take forever, although once loaded it's extremely fast (obviously). My big question is how should I attack this? I don't know Ajax, although I imagine it'd be a good solution for loading the tabs when clicked. Thanks! Elliot

  • Hibernate triggering constraint violations using orphanRemoval

    - by ptomli
    I'm having trouble with a JPA/Hibernate (3.5.3) setup, where I have an entity, an "Account" class, which has a list of child entities, "Contact" instances. I'm trying to be able to add/remove instances of Contact in a List<Contact> property of Account. Adding a new instance to the list and calling saveOrUpdate(account) persists everything nicely. If I then choose to remove the contact from the list and again call saveOrUpdate, the SQL Hibernate produces involves setting the account_id column to null, which violates a database constraint. What am I doing wrong? The code below is clearly a simplified abstract, but I think it covers the problem, as I'm seeing the same results in different code which really is about this simple.

    SQL:

      CREATE TABLE account (
        account_id INT
      );
      CREATE TABLE contact (
        contact_id INT,
        account_id INT REFERENCES account (account_id)
      );

    Java:

      @Entity
      class Account {
          @Id @Column
          public Long id;

          @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true)
          @JoinColumn(name = "account_id")
          public List<Contact> contacts;
      }

      @Entity
      class Contact {
          @Id @Column
          public Long id;

          @ManyToOne(optional = false)
          @JoinColumn(name = "account_id", nullable = false)
          public Account account;
      }

      Account account = new Account();
      Contact contact = new Contact();
      account.contacts.add(contact);
      saveOrUpdate(account);

      // some time later, like another servlet request....
      account.contacts.remove(contact);
      saveOrUpdate(account);

    Result:

      UPDATE contact SET account_id = null WHERE contact_id = ?

    Edit #1: It might be that this is actually a bug: http://opensource.atlassian.com/projects/hibernate/browse/HHH-5091

  • PHPUnit tests for a controller return no errors and no success message

    - by Mallika Iyer
    I'm using the Zend modular directory structure, i.e.:

      application/
        modules/
          users/
            controllers/
            ...
          lessons/
          reports/
          blog/

    I have a unit test for a controller in 'blog' that goes something like the section of code below. I'm definitely doing something very wrong, or missing something, because when I run the test I get no error and no success message (the kind that usually reads "...OK (2 tests, 2 assertions)"). Instead I get all the text from layout.phtml, where I have the global site layout. This is my first endeavour writing a unit test for the Zend MVC structure, so I'm probably missing something important. Here goes:

      require_once '../../../../public/index.php';
      require_once '../../../../application/Bootstrap.php';
      require_once '../../../../application/modules/blog/controllers/BrowseController.php';
      require_once '../../../TestConfiguration.php';

      class Blog_BrowseControllerTest extends Zend_Test_PHPUnit_ControllerTestCase
      {
          public function setUp()
          {
              $this->bootstrap = array($this, 'appBootstrap');
              Blog_BrowseController::setUp();
          }

          public function appBootstrap()
          {
              require_once dirname(__FILE__) . '/../../bootstrap.php';
          }

          public function testAction()
          {
              $this->dispatch('/');
              $this->assertController('browse');
              $this->assertAction('index');
          }

          public function tearDown()
          {
              $this->resetRequest();
              $this->resetResponse();
              Blog_BrowseController::tearDown();
          }
      }

  • xcode 5.1: libCordova.a architecture problems

    - by inorganik
    Yesterday (3/10/14) when iOS 7.1 was released I also upgraded to Xcode 5.1 and found that my PhoneGap/Cordova project would no longer compile to my iPhone 5s. I also upgraded Cordova to the most recent release: v 3.4.0-0.1.3. I have read many different solutions on SO that relate to changing active architectures and building only active architectures, and none of them work. So here's what I've tried and the errors I get. Initially I got the error:

      missing required architecture arm64 in file <long file path omitted> libCordova.a
      Undefined symbols for architecture arm64

    So I tried the following. I selected the CordovaLib sub-project in my project, and in both the project and the target I went to Build Settings under Architectures and made sure that arm64 was not included in any of the Debug or Release architectures. At this time Build Active Architecture Only is set to "Yes". That resulted in the following error:

      file was built for archive which is not the architecture being linked (armv7): <long file path omitted> libCordova.a
      Undefined symbols for architecture armv7

    Setting Build Active Architecture Only to "No", the error again becomes:

      missing required architecture arm64 in file <long file path omitted> libCordova.a
      Undefined symbols for architecture arm64

    I'm not sure what else to try. The project's architecture settings only include the key "Base SDK", which is set to iOS 7.1. The project's target does not have architecture settings. Anyway, I'm fairly certain the problem lies with the embedded CordovaLib sub-project. What can I do to make this thing compile to my device successfully?

    Update: same issue on Apache's Jira: https://issues.apache.org/jira/browse/CB-6223

  • flex actionscript not uploading file to PHP page HELP!

    - by Rees
    Hello, please help! I am using ActionScript 3 with Flex SDK 3.5 and PHP to allow a user to upload a file; that is my goal. However, when I check my server directory for the file... NOTHING is there! For some reason SOMETHING is going wrong, even though the ActionScript alerts a successful upload (and I have even tried all the event listeners for upload errors and none are triggered). I have also tested the PHP script and it uploads SUCCESSFULLY when receiving a file from another PHP page (so I'm led to believe there is nothing wrong with my PHP). However, ActionScript is NOT giving me any errors when I upload; in fact it gives me a successful event... and I know my Flex application is actually trying to send the data, because when I attempt to upload a large file it takes significantly more time to alert a "successful" event than when I upload a small file. I feel I have debugged every aspect of this code and am now spent. Please, anyone, can you tell me what's going wrong, or at least how I can find out what's happening? I'm using the Flash debugger and I'm still getting zero errors.

      private var fileRef:FileReference = new FileReference();
      private var flyerrequest:URLRequest = new URLRequest("http://mysite.com/sub/upload_file.php");

      private function uploadFile():void {
          fileRef.browse();
          fileRef.addEventListener(Event.SELECT, selectHandler);
          fileRef.addEventListener(Event.COMPLETE, completeHandler);
      }

      private function selectHandler(event:Event):void {
          fileRef.upload(flyerrequest);
      }

      private function completeHandler(event:Event):void {
          Alert.show("uploaded");
      }

  • ASP.NET MVC on Cassini: How can I force the "content" directory to return 304s instead of 200s?

    - by Portman
    Scenario: I have an ASP.NET MVC application developed in Visual Studio 2008. There is a root folder named "Content" that stores images and stylesheets. When I run locally (using Cassini) and browse my application, every resource from the "Content" directory is always downloaded. Using Firebug, I can verify that the web server returns an HTTP 200 ("ok").

    Desired: I would like for Cassini to return HTTP 304 ("not modified") instead of 200. This is the behavior when running the site under IIS7.

    Reasoning: The site I am working on has a large number of static resources (often as many as 40 per page). Browsing the site is very fast on IIS7, because these resources are (correctly) cached by the browser. However, browsing the site on my local machine is painfully slow. Pages that render in under 1 second on IIS7 take over 30 seconds to render on Cassini. It's actually faster for me to upload the entire website every few minutes and test from there. (Yes, I recognize that this is perverse and crazy.)

    So: how can I instruct/trick Cassini into treating the "Content" directory like IIS7 does?
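    Cassini pushes every request, static files included, through the ASP.NET pipeline, so one possible workaround (a sketch, not something from the original post) is an HttpModule that answers conditional GETs for the Content folder itself; the class name and path prefix here are assumptions, and the module would be registered under <httpModules> in Web.config:

      using System;
      using System.IO;
      using System.Web;

      public class StaticContent304Module : IHttpModule
      {
          public void Init(HttpApplication app)
          {
              app.BeginRequest += (sender, e) =>
              {
                  HttpContext ctx = ((HttpApplication)sender).Context;
                  if (!ctx.Request.Path.StartsWith("/Content/", StringComparison.OrdinalIgnoreCase))
                      return;

                  string file = ctx.Server.MapPath(ctx.Request.Path);
                  if (!File.Exists(file))
                      return;

                  // HTTP dates have one-second resolution, so trim the file time to match.
                  DateTime lastWrite = File.GetLastWriteTimeUtc(file);
                  lastWrite = lastWrite.AddTicks(-(lastWrite.Ticks % TimeSpan.TicksPerSecond));

                  DateTime ifModifiedSince;
                  string header = ctx.Request.Headers["If-Modified-Since"];
                  if (header != null && DateTime.TryParse(header, out ifModifiedSince)
                      && lastWrite <= ifModifiedSince.ToUniversalTime())
                  {
                      ctx.Response.StatusCode = 304;
                      ctx.Response.SuppressContent = true;
                      ctx.Response.End();
                      return;
                  }

                  // Otherwise serve normally, but stamp the response so the
                  // browser's next request can be conditional.
                  ctx.Response.Cache.SetCacheability(HttpCacheability.Public);
                  ctx.Response.Cache.SetLastModified(lastWrite);
              };
          }

          public void Dispose() { }
      }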

  • git pull crashes after another member pushes something

    - by naiad
    Here's the story: we have a GitHub account. I clone the repository... then I can work with it, commit things, push things, etc. I use Linux with the command line and git version 1.7.7.3. Then another user, using Eclipse with the git plugin for Eclipse (eGit 1.1.0), pushes something, and it appears in the GitHub web pages as the last commit, but when I try to pull:

      $ git pull
      remote: Counting objects: 13, done.
      remote: Compressing objects: 100% (6/6), done.
      remote: Total 9 (delta 2), reused 7 (delta 0)
      Unpacking objects: 100% (9/9), done.
      error: unable to find 3e6c5386cab0c605877f296642d5183f582964b6
      fatal: object 3e6c5386cab0c605877f296642d5183f582964b6 not found

    "3e6c5386cab0c605877f296642d5183f582964b6" is the commit hash of the last commit, done by the other user... there's no problem at all browsing it through the web page, but for me it's impossible to pull it. It's strange, because my command-line output mentions that commit hash, so my git knows it is the latest commit in the GitHub system, but it cannot pull it! Maybe the git protocol used in eGit is incompatible with the console git 1.7.7.3...

  • Visual Studio 2008: Can't connect to known good TFS 2010 beta 2

    - by p.campbell
    A freshly installed TFS 2010 Beta 2 is at http://serverX:8080/tfs. A Windows 7 developer machine has VS 2008 Pro SP1 and the VS 2008 Team Explorer (no SP). The TFS 2008 Service Pack 1 didn't work for me - "None of the products that are addressed by this software update are installed on this computer." The developer machine is able to browse the TFS site at the above URL.

    The issue is around trying to add the TFS server into the Team Explorer window in Visual Studio 2008. The screenshot shows the error: unable to connect to this Team Foundation Server. Possible reasons for failure include:

      - The Team Foundation Server name, port number or protocol is incorrect.
      - The Team Foundation Server is offline.
      - Password is expired or incorrect.

    The TFS server is up and running properly. Firewall ports are open, and it is accessible via the browser on the dev machine!!

    Question: how can you connect from VS 2008 Pro to a TFS 2010 Beta 2 server?

    Resolution: Here's how I solved this problem:

      - installed VS 2008 Team Explorer as above
      - re-installed VS 2008 Service Pack 1
      - when adding a TFS server to Team Explorer, you MUST specify the URL as such: http://[tfsserver]:[port]/[vdir]/[projectCollection] - in my case above, it was http://serverX:8080/tfs/AppDev-TestProject
      - you cannot simply add the TFS server name and have VS look for all Project Collections on the server. TFS 2010 has a new URL (by default) and VS 2008 doesn't recognize how to gather that list.

  • Unregistering COM dll with a C# Setup Project

    - by lb
    Hi all. I've been stuck on this one for a while. I'll try to explain it in the simplest terms and to the best of my knowledge; I will honour any help.

    I've got a C# project which uses a VB6-compiled ActiveX DLL that I'm constantly updating. I compile the setup project, send it to the client and they run the setup. When building the updated setup project, I would increase the 'Version' of the setup project so it wouldn't complain that 'Another version is already installed'. After a few updates, though, I began to notice that the DLL would not be updated to the new version by the installer. The client computer had the original DLL both installed and registered. First symptom: "method not found" exceptions from the client C# code. This is not a shared DLL; only this application needs it.

    I've noticed that when uninstalling the application (through the usual procedure) the DLL is also not removed from the application folder, although I set the file's 'Permanent' property to false. The registration entries in the registry are maintained as well. I do update the version of the DLL in VS 6.0 (usually increasing the build number) before building it. Then in VS 2008 I remove it from the References and add it again from the 'Browse' tab, without re-registering it on my dev machine and adding it from the COM tab.

    I've thought of these options:

      1. A custom step in the setup project to run regsvr32.exe /u 'hardcoded path of my dll' at uninstall (ugly; a sketch of this approach follows below).
      2. Somehow find out how the 'Isolate' property can work for me without registering.
      3. Find out how to use setup project 'Conditions' that would actually check the version of the library and update the file accordingly at every install.

    Any help would be incredibly welcome.
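    For the first option, a custom action does not have to hardcode the path: the setup project can pass the install folder to an installer class via CustomActionData (e.g. /targetdir="[TARGETDIR]\"). A rough sketch, not a tested implementation, with the class and DLL names being assumptions:

      using System.Collections;
      using System.ComponentModel;
      using System.Configuration.Install;
      using System.Diagnostics;
      using System.IO;

      [RunInstaller(true)]
      public class ComDllInstaller : Installer
      {
          // Assumed file name of the VB6 ActiveX DLL shipped with the application.
          private const string DllName = "MyActiveX.dll";

          public override void Install(IDictionary stateSaver)
          {
              base.Install(stateSaver);
              // Re-register on every install so the freshly copied DLL always wins.
              RegSvr32("/s", Path.Combine(TargetDir(), DllName));
          }

          public override void Uninstall(IDictionary savedState)
          {
              // Unregister before the file is removed so no stale registry entries remain.
              RegSvr32("/u /s", Path.Combine(TargetDir(), DllName));
              base.Uninstall(savedState);
          }

          private string TargetDir()
          {
              // Supplied by the setup project's custom action as /targetdir="[TARGETDIR]\"
              return Context.Parameters["targetdir"];
          }

          private static void RegSvr32(string options, string dllPath)
          {
              using (Process p = Process.Start("regsvr32.exe", options + " \"" + dllPath + "\""))
              {
                  p.WaitForExit();
              }
          }
      }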

  • kick off a map reduce job from my java/mysql webapp

    - by Brian
    Hi guys, I need a bit of architecture advice. I have a Java-based webapp with a JPA-based ORM backed onto a MySQL relational database. Now, as part of the application I have a batch job that compares thousands of database records with each other. This job has become too time-consuming and needs to be parallelized. I'm looking at using MapReduce and Hadoop in order to do this. However, I'm not too sure about how to integrate this into my current architecture. I think the easiest initial solution is to find a way to push data from MySQL into Hadoop jobs. I have done some initial research on this and found the following relevant information and possibilities:

      1) https://issues.apache.org/jira/browse/HADOOP-2536 - gives an interesting overview of some built-in JDBC support
      2) This article http://architects.dzone.com/articles/tools-moving-sql-database describes some third-party tools to move data from MySQL to Hadoop.

    To be honest, I'm just starting out with learning about HBase and Hadoop, but I really don't know how to integrate this into my webapp. Any advice is greatly appreciated. Cheers, Brian

  • Why can't perfmon see instances of my custom performance counter?

    - by spoulson
    I'm creating some custom performance counters for an application. I wrote a simple C# tool to create the categories and counters. For example, the code snippet below is basically what I'm running. Then I run a separate app that endlessly refreshes the raw value of the counter. While that runs, the counter and dummy instance are visible locally in perfmon.

    The problem I'm having is that the monitoring system we use can't see the instances in the multi-instance counter I've created when viewing remotely from another server. When using perfmon to browse the counters, I can see the category and counters, but the instances box is grayed out and I can't even select "All instances", nor can I click "Add". Other access methods, like typeperf, exhibit similar issues. I'm not sure if this is a server or code issue. This is only reproducible in the production environment where I need it; on my desktop and development servers it works great. I'm a local admin on all servers.

      CounterCreationDataCollection collection = new CounterCreationDataCollection();
      var category_name = "My Application";
      var counter_name = "My counter name";

      CounterCreationData ccd = new CounterCreationData();
      ccd.CounterType = PerformanceCounterType.RateOfCountsPerSecond64;
      ccd.CounterName = counter_name;
      ccd.CounterHelp = counter_name;
      collection.Add(ccd);

      PerformanceCounterCategory.Create(category_name, category_name,
          PerformanceCounterCategoryType.MultiInstance, collection);

    Then, in a separate app, I run this to generate dummy instance data:

      var pc = new PerformanceCounter(category_name, counter_name, instance_name, false);
      while (true)
      {
          pc.RawValue = 0;
          Thread.Sleep(1000);
      }
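    One way to check whether the instances are being published at all (not part of the original post) is to query the category from another machine through .NET rather than perfmon, which helps separate a counter problem from a perfmon or permissions problem; the machine name below is an assumption:

      using System;
      using System.Diagnostics;

      class RemoteInstanceCheck
      {
          static void Main()
          {
              // Reads the category from the production server remotely and lists
              // whatever instances it currently exposes.
              var category = new PerformanceCounterCategory("My Application", "PRODSERVER01");
              foreach (string instance in category.GetInstanceNames())
              {
                  Console.WriteLine(instance);
              }
          }
      }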

  • Problems using Maven to initialize a local thoughtsite (App Engine sample) project in Eclipse

    - by ovr
    This sample app ("thoughtsite") for App Engine contains a pom.xml in its trunk: http://code.google.com/p/thoughtsite/source/browse/#svn/trunk I ran mvn eclipse:eclipse and also tried using m2eclipse to import this source code into an Eclipse project. But I end up with this error despite the fact that I have the Google App Engine plugin and the Google App Engine SDK installed: Exception in thread "main" java.lang.ExceptionInInitializerError at com.google.appengine.tools.info.SdkImplInfo.<clinit>(SdkImplInfo.java:19) at com.google.appengine.tools.util.Logging.initializeLogging(Logging.java:36) at com.google.appengine.tools.development.DevAppServerMain.main(DevAppServerMain.java:82) Caused by: java.lang.RuntimeException: Unable to discover the Google App Engine SDK root. This code should be loaded from the SDK directory, but was instead loaded from file:~/.m2/repository/com/google/appengine/appengine-tools-sdk/1.3.0/appengine-tools-sdk-1.3.0.jar. Specify -Dappengine.sdk.root to override the SDK location. at com.google.appengine.tools.info.SdkInfo.findSdkRoot(SdkInfo.java:106) at com.google.appengine.tools.info.SdkInfo.<clinit>(SdkInfo.java:24) ... 3 more When I go into the project settings under "Google" and try to set it to use the default App Engine SDK it always reverts to trying to use Maven's App Engine SDK instead. No idea how to get this project working.

  • Which Project Management Software is adequate for Software & Non-Software Projects?

    - by cusack
    PMS = Project Management Software

    I used Trac for software development some time ago. Right now I'm searching for a new, more powerful solution (scheduling, Gantt charts, ...) that is free (as in free beer ;-) and free to install on my server, for my current software project. Besides the current software project, abstract project management features like issue tracking & scheduling would be great for coordinating a group of volunteers for real-life projects as well. I would want one solution for both purposes, so that I have the hassle of installation, getting used to the system and administration only once. So I tried Redmine, but the problem is it seems to be designed for software projects only. I can't suggest such a solution for the volunteer group if tickets/issues would have to be of type bug, feature, ...

    I shortlisted the following six PMS from the Wikipedia comparison http://en.wikipedia.org/wiki/List_of_project_management_software:

      - Project.net
      - Project-Open
      - Redmine
      - Trac
      - Endeavour Software Project Management
      - eGroupWare

    I guess they are all more or less fine for software development, but would you consider any of these to be good for the non-software projects as well?

    Cliff notes: I would want a start-page situation like in Trac. The start page is a wiki presenting the project, not the PMS, but you can log into the PMS from there. Feature wish list: wiki, issue tracking, revision control, scheduling & Gantt charts, forums (least important). (Btw: I'm very aware that I can't expect everything to be perfect ;-)

    1.) Do you know a suitable solution for software and real-life projects, or a highly customizable PMS where I can easily remove something like "browse source" (Trac) and rename things like the ticket/issue types "bug", "feature"?
    2.) Any experience, good or bad, with the above-mentioned six PMS? I would personally guess that "Redmine" and "Endeavour Software Project Management" are too focused on software projects.

  • Broken ssl, what to do

    - by TIT
    I have a site and I implemented SSL there, but when I browse it, the security seals don't appear. I asked GoDaddy and they replied:

    "Thank you for contacting online support. I cannot replicate the issue you have described. The error you described is caused by the way your site has been designed. If you receive this error, you have a combination of secure and non-secure objects on the page. For example, if your secure website was https://www.domain.tld and you added an object (an image, script, flash file, etc.) to that page that was located at http://www.domain.tld/image.jpg, you would break the seal. You will need to change your design to link to objects using https (ie https://www.domain.tld/image.jpg) or modify your site design to use relative paths (/image.jpg). This error can only be corrected by modifying your site design. Please contact your web designer or the manufacturer of your web design software if you require additional assistance modifying your site design."

    But the problem is I did all of that: all my images and JavaScript files are under https, yet the seal is still not appearing and the browser says some content is insecure. What is the problem?

  • Test multiple domains using ASP.NET development server

    - by Pete Lunenfeld
    I am developing a single web application that will dynamically change its content depending on which domain name is used to reach the site. Multiple domains will point to the same application. I wish to use the following code (or something close) to detect the domain name and perform the customizations:

      string theDomainName = Request.Url.Host;
      switch (theDomainName)
      {
          case "www.clientone.com":
              // do stuff
              break;
          case "www.clienttwo.com":
              // do other stuff
              break;
      }

    I would like to test the functionality of the above using the ASP.NET development server. I created mappings in the local HOSTS file to map www.clientone.com to 127.0.0.1 and www.clienttwo.com to 127.0.0.1, and then browse to the application in the browser using www.clientone.com (etc.). However, when I try to test this code using the ASP.NET development server, the URL always says localhost; it does NOT capture the host entered in the browser, only localhost. Is there a way to test the URL detection functionality using the development server? Thanks.
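    One way to keep testing against the development server (not something from the original post) is to make the host lookup overridable, so the switch logic can be exercised even when the request host is localhost; the query-string and appSetting names here are assumptions:

      using System.Configuration;
      using System.Web;

      public static class HostHelper
      {
          // Returns the real host in production, but honours a local override,
          // e.g. http://localhost:1234/?simulatedHost=www.clientone.com
          // or <add key="SimulatedHost" value="www.clientone.com" /> in Web.config.
          public static string GetEffectiveHost(HttpRequest request)
          {
              string overrideHost = request.QueryString["simulatedHost"];
              if (string.IsNullOrEmpty(overrideHost))
                  overrideHost = ConfigurationManager.AppSettings["SimulatedHost"];

              return string.IsNullOrEmpty(overrideHost) ? request.Url.Host : overrideHost;
          }
      }

    The switch statement can then test HostHelper.GetEffectiveHost(Request) instead of Request.Url.Host.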

  • Publishing artifacts with sources on archiva

    - by Palimondo
    At work I'm dipping my toes into managing project dependencies with Maven. We use Apache Archiva (1.2.1) as a local repository and proxy. I'm adding an artifact for an open source project that is not published on any public repository. I've learned that to publish the sources I should use the Classifier field on the Upload Artifact page. The sources are then listed alongside the jar and pom when I browse the repository. But when I update my Maven dependencies I get only the jar and pom from the repository. I noticed that sources are also missing when Archiva proxies the downloads from other public repositories for me. I didn't find any configuration options in Archiva's admin pages to serve the sources... What am I missing?

    Update: I was missing the fact that artifact sources have to be downloaded explicitly, i.e. the Maven client has to request them, which is controlled by the command-line option -DdownloadSources=true. The Maven Integration for Eclipse has a preference setting to always download them, as described in "Resolving artifact sources". Archiva then serves the sources for local artifacts, or proxies the request to remote repositories and caches the sources for future requests.

  • Translating CURL to FLEX HTTPRequests

    - by Joshua
    I am trying to convert some cURL code to Flex/ActionScript. Since I am 100% ignorant about cURL, 50% ignorant about Flex and 90% ignorant about HTTP in general... I'm having some significant difficulty. The following cURL code is from http://code.google.com/p/ga-api-http-samples/source/browse/trunk/src/v2/accountFeed.sh and I have every reason to believe that it works correctly:

      USER_EMAIL="[email protected]"   #Insert your Google Account email here
      USER_PASS="secretpass"          #Insert your password here

      googleAuth="$(curl https://www.google.com/accounts/ClientLogin -s \
        -d Email=$USER_EMAIL \
        -d Passwd=$USER_PASS \
        -d accountType=GOOGLE \
        -d source=curl-accountFeed-v2 \
        -d service=analytics \
        | awk /Auth=.*/)"

      feedUri="https://www.google.com/analytics/feeds/accounts/default\
      ?prettyprint=true"

      curl $feedUri --silent \
        --header "Authorization: GoogleLogin $googleAuth" \
        --header "GData-Version: 2"

    The following is my abortive attempt to translate the above cURL to AS3:

      var request:URLRequest = new URLRequest("https://www.google.com/analytics/feeds/accounts/default");
      request.method = URLRequestMethod.POST;

      var GoogleAuth:String = "$(curl https://www.google.com/accounts/ClientLogin -s " +
          "-d [email protected] " +
          "-d Passwd=secretpass " +
          "-d accountType=GOOGLE " +
          "-d source=curl-accountFeed-v2" +
          "-d service=analytics " +
          "| awk /Auth=.*/)";

      request.requestHeaders.push(new URLRequestHeader("Authorization", "GoogleLogin " + GoogleAuth));
      request.requestHeaders.push(new URLRequestHeader("GData-Version", "2"));

      var loader:URLLoader = new URLLoader();
      loader.dataFormat = URLLoaderDataFormat.BINARY;
      loader.addEventListener(Event.COMPLETE, GACompleteHandler);
      loader.addEventListener(IOErrorEvent.IO_ERROR, GAErrorHandler);
      loader.addEventListener(SecurityErrorEvent.SECURITY_ERROR, GAErrorHandler);
      loader.load(request);

    This probably provides you all with a good laugh, and that's okay, but if you can find any pity for me, please let me know what I'm missing. I readily admit functional ineptitude, therefore letting me know how stupid I am is optional.
