Search Results

Search found 11568 results on 463 pages for 'config spec'.

Page 384/463 | < Previous Page | 380 381 382 383 384 385 386 387 388 389 390 391  | Next Page >

  • C++ thread safety - exchange data between worker and controller

    - by peterchen
    I still feel a bit unsafe about the topic and hope you folks can help me - For passing data (configuration or results) between a worker thread polling something and a controlling thread interested in the most recent data, I've ended up using more or less the following pattern repeatedly: Mutex m; tData * stage; // temporary, accessed concurrently // send data, gives up ownership, receives old stage if any tData * Send(tData * newData) { ScopedLock lock(m); swap(newData, stage); return newData; } // receiving thread fetches latest data here tData * Fetch(tData * prev) { ScopedLock lock(m); if (stage != 0) { // ... release prev prev = stage; stage = 0; } return prev; // now current } Note: This is not supposed to be a full producer-consumer queue, only the most recent data is relevant. Also, I've skimmed resource management somewhat here. When necessary I'm using two such stages: one to send config changes to the worker, and one for sending back results. Now, my questions, assuming that ScopedLock implements a full memory barrier: do stage and/or workerData need to be volatile? Is volatile necessary for tData members? Can I use smart pointers instead of the raw pointers - say boost::shared_ptr? Anything else that can go wrong? I am basically trying to avoid "volatile infection" spreading into tData, and to minimize lock contention (a lock-free implementation seems possible, too). However, I'm not sure if this is the easiest solution. ScopedLock acts as a full memory barrier. Since all this is more or less platform dependent, let's say Visual C++ x86 or x64, though differences/notes for other platforms are welcome, too. (A preliminary "thanks, but" for recommending libraries such as Intel TBB - I am trying to understand the platform issues here.)
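
    With a mutex that provides normal acquire/release semantics, neither the stage pointer nor the tData members need volatile; the lock is what orders the memory accesses. A minimal sketch of the same single-slot exchange using std::shared_ptr and std::mutex (C++11), where the class and member names are illustrative rather than taken from the original code:

      #include <memory>
      #include <mutex>
      #include <utility>

      // Single-slot "mailbox": only the most recent item matters.
      template <typename T>
      class Stage {
      public:
          // Producer publishes new data; shared_ptr handles the old value's lifetime.
          void Send(std::shared_ptr<T> newData) {
              std::lock_guard<std::mutex> lock(m_);
              stage_ = std::move(newData);
          }

          // Consumer takes the latest data, or gets nullptr if nothing new arrived.
          std::shared_ptr<T> Fetch() {
              std::lock_guard<std::mutex> lock(m_);
              return std::move(stage_);   // leaves stage_ empty
          }

      private:
          std::mutex m_;
          std::shared_ptr<T> stage_;
      };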

    Read the article

  • Thinking sphinx: Problems with polymorphic associations

    - by auralbee
    Hello, I recently switched from ultrasphinx to thinking_sphinx for full-text search. I am trying to figure out how to index fields of polymorphic associations. I found some information but I still have some problems, although it seems to be easy. Here is my setup: class Info < ActiveRecord::Base belongs_to :mappable, :polymorphic => true define_index indexes mappable_type indexes mappable(:name), :as => :mappable_name end class A < ActiveRecord::Base has_many :infos, :as => :mappable end class B < ActiveRecord::Base has_many :infos, :as => :mappable end Amongst others, I want to do a search in the name column of A and B (both classes have this column), so I added the field to my index. When I do rake thinking_sphinx:index I get the following error: Generating Configuration to .../config/development.sphinx.conf rake aborted! undefined method `connection' for Object:Class .../.gem/ruby/1.8/gems/thinking-sphinx-1.3.16/lib/thinking_sphinx/ association.rb:149:in `casted_options' Any idea? Am I missing something? Thanks in advance. Tobi

    Read the article

  • WCF Streaming not working at server

    - by Radhi
    Hi, I am using a WCF service to transfer large files to the server in chunks; for that I followed this article: http://kjellsj.blogspot.com/2007/02/wcf-streaming-upload-files-over-http.html I have configured my application on IIS on my machine and it works fine there, allowing file uploads of up to 64 MB. But once we published the site, it only accepts files up to about 30 MB; if I try to upload more than that I get error 404 - resource not found. Here is the binding config I have used: <basicHttpBinding> <!-- buffer: 64KB; max size: 64MB --> <binding name="FileTransferServicesBinding" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" transferMode="Streamed" messageEncoding="Mtom" maxBufferSize="65536" maxReceivedMessageSize="67108864"> <security mode="None"> <transport clientCredentialType="None"/> </security> </binding> </basicHttpBinding> Please suggest what I am missing, and if more code is required please let me know - thanks in advance
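
    The binding above already allows 64 MB, so the ceiling on the published site most likely comes from IIS 7 request filtering, whose default maxAllowedContentLength is 30,000,000 bytes (about 28.6 MB) and which rejects larger uploads with a 404. A sketch of the web.config additions that would raise both the IIS and the ASP.NET limits to 64 MB (values are illustrative):

      <system.webServer>
        <security>
          <requestFiltering>
            <!-- in bytes: 64 MB (default is 30,000,000 bytes, ~28.6 MB) -->
            <requestLimits maxAllowedContentLength="67108864" />
          </requestFiltering>
        </security>
      </system.webServer>
      <system.web>
        <!-- in kilobytes: 64 MB; ASP.NET enforces its own limit separately -->
        <httpRuntime maxRequestLength="65536" />
      </system.web>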

    Read the article

  • import txt files using excel interop in C# (QueryTables.Add)

    - by kite
    Hi all, I am trying to import a text file into an Excel worksheet using QueryTables.Add. There is no error, but the worksheet stays empty, except for the single cell I set directly through the Value2 property. I already used a macro recording to find the objects involved. Can you help me with this? (I am using VS2008, C#, and Excel 2003 and 2007; both show an empty sheet.) Below is my code; thanks for your help. Application application = new ApplicationClass(); try { object misValue = Missing.Value; wbDoc = application.Workbooks.Open(flnmDoc, misValue, misValue, misValue, misValue, misValue, misValue, misValue, misValue, misValue, misValue, misValue, misValue, misValue, misValue); wsRefDocBudgetOwner = (Worksheet)wbDoc.Worksheets[2]; Range lRange = wsRefDocBudgetOwner.get_Range("B2", "B25"); var temp2 = wsRefDocBudgetOwner.QueryTables; var temp = temp2.Add(@"TEXT;d:\temp\config ssas.txt", lRange, Type.Missing); //temp.RefreshStyle = XlCellInsertionMode.xlInsertDeleteCells; //temp.RefreshOnFileOpen = true; wsRefDocBudgetOwner.get_Range("B1", "B1").Value2 = "Lgfdgast adsffdafadfads"; wbDoc.Save(); //wbDoc.SaveAs(flnmDoc2, misValue, misValue, misValue, misValue, misValue, XlSaveAsAccessMode.xlExclusive, // misValue, misValue, misValue, misValue, misValue); wbDoc.Close(Missing.Value, Missing.Value, Missing.Value); } finally { application.Quit(); }
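
    One step that appears to be missing is refreshing the query table after adding it: QueryTables.Add only defines the import, and no data is pulled into the sheet until Refresh is called. A sketch of the relevant portion, keeping the variable names from the code above (the parse settings are assumptions about the file's format):

      // Define the text import against the destination range...
      var temp = temp2.Add(@"TEXT;d:\temp\config ssas.txt", lRange, Type.Missing);

      // ...describe how the file is laid out...
      temp.TextFileParseType = XlTextParsingType.xlDelimited;   // or xlFixedWidth
      temp.TextFileTabDelimiter = true;                         // adjust to the actual delimiter

      // ...and actually run the import; without this the worksheet stays empty.
      temp.Refresh(false);                                      // false = synchronous refresh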

    Read the article

  • How to get aspnet_Users.UserId for an anonymous user in ASP.NET membership ?

    - by Simon_Weaver
    I am trying to get the aspnet membership UserId field from an anonymous user. I have enabled anonymous identification in web.config : <anonymousIdentification enabled="true" /> I have also created a profile: <profile> <providers> <clear /> <add name="AspNetSqlProfileProvider" type="System.Web.Profile.SqlProfileProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" connectionStringName="ApplicationServices" applicationName="/" /> </providers> <properties> <add name="Email" type="System.String" allowAnonymous="true"/> <add name="SomethingElse" type="System.Int32" allowAnonymous="true"/> </properties> </profile> I see data in my aspnetdb\aspnet_Users table for anonymous users that have had profile information set. Userid PropertyNames PropertyValuesString 36139818-4245-45dd-99cb-2a721d43f9c5 Email:S:0:17: [email protected] I just need to find how to get the 'Userid' value. It is not possible to use : Membership.Provider.GetUser(Request.AnonymousID, false); The Request.AnonymousID is a different GUID and not equal to 36139818-4245-45dd-99cb-2a721d43f9c5. Is there any way to get this Userid. I want to associate it with incomplete activity for an anonymous user. Using the primary key of aspnet_Users is preferable to having to create my own GUID (which I could do and store in the profile). This is basically a dupe of this question but the question was never actually answered.
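
    One workaround, assuming the stock aspnetdb schema, is to look the row up directly: anonymous profiles land in aspnet_Users with UserName set to the anonymous identification ID and IsAnonymous = 1, so the UserId can be read with a small query. A sketch (this bypasses the provider API, and it ignores ApplicationId, which is only safe if the database hosts a single application):

      using System;
      using System.Configuration;
      using System.Data.SqlClient;
      using System.Web;

      public static class AnonymousUserHelper
      {
          // Returns aspnet_Users.UserId for the current anonymous visitor, or null if no row exists yet.
          public static Guid? GetAnonymousUserId(HttpContext context)
          {
              string connStr = ConfigurationManager
                  .ConnectionStrings["ApplicationServices"].ConnectionString;

              const string sql =
                  @"SELECT UserId FROM dbo.aspnet_Users
                    WHERE UserName = @name AND IsAnonymous = 1";

              using (var conn = new SqlConnection(connStr))
              using (var cmd = new SqlCommand(sql, conn))
              {
                  cmd.Parameters.AddWithValue("@name", context.Request.AnonymousID);
                  conn.Open();
                  object result = cmd.ExecuteScalar();
                  return result == null ? (Guid?)null : (Guid)result;
              }
          }
      }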

    Read the article

  • IIS7 ASP.NET Session drops in seconds

    - by shxo
    For testing I have 1 isolated page - no masters, controls, …. My sessions are lost after about 30 seconds. I’ve tried setting the timeout on the page itself, in web.config, both, and neither. I’ve tried forms authentication with a timeout and Windows authentication, and I recycle the AppPool after changes. I can Response.Write from Session_Start, but I never get any Response.Writes from Session_End. Some things I’ve tried: <sessionState mode="InProc" stateConnectionString="tcpip=127.0.0.1:42424" sqlConnectionString="data source=127.0.0.1;" cookieless="false" timeout="20" /> <sessionState mode="InProc" cookieless="false" timeout="20"/> <sessionState mode="InProc" timeout="20"/> <sessionState timeout="20"/> No luck. My runtime is set to: <httpRuntime useFullyQualifiedRedirectUrl="true" maxRequestLength="204800" requestLengthDiskThreshold="204800" executionTimeout="600" /> I don’t know whether this is relevant, but I can’t think of anything else to post! Thanks!

    Read the article

  • How to consume webservices over https

    - by Kumar
    I am trying to consume web services located at https://TestServices/ServiceList.asmx. When I add a service reference to my C# class library project, my app.config file looks like the following: <system.serviceModel> <bindings> <basicHttpBinding> <binding name="TestServicesSoap" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" allowCookies="false" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard" maxBufferSize="50000000" maxBufferPoolSize="524288" maxReceivedMessageSize="50000000" messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered" useDefaultWebProxy="true"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" /> <security mode="None"> <transport clientCredentialType="None" proxyCredentialType="None" realm=""> </transport> <message clientCredentialType="UserName" algorithmSuite="Default" /> </security> </binding> </basicHttpBinding> </bindings> <client> <endpoint address="http://TestServices/ServiceList.asmx" binding="basicHttpBinding" bindingConfiguration="TestServicesSoap" contract="TestServices.TestServicesSoap" name="TestServicesSoap" /> </client> </system.serviceModel> Even though I added the service reference using https://TestServices/ServiceList.asmx, for some reason the endpoint address still points to http://TestServices/ServiceList.asmx. I tried changing the http to https, but then I get the error below: The provided URI scheme 'https' is invalid; expected 'http'. Parameter name: via What is the right way to consume these web services over https?
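
    The generated binding has security mode="None", which is what makes WCF refuse an https address. For plain SSL with no client credentials, the usual fix is to switch the binding's security mode to Transport and point the endpoint at the https URL; a sketch of the same thing done in code (the proxy class name is assumed from the generated contract above):

      using System.ServiceModel;

      // Equivalent of <security mode="Transport"> in config:
      var binding = new BasicHttpBinding(BasicHttpSecurityMode.Transport);
      binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.None;
      binding.MaxReceivedMessageSize = 50000000;

      var endpoint = new EndpointAddress("https://TestServices/ServiceList.asmx");

      // Proxy generated by Add Service Reference (name assumed):
      var client = new TestServices.TestServicesSoapClient(binding, endpoint);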

    Read the article

  • Passenger: "Missing these required gems redgreen"

    - by Michael Stum
    Hello, total Ruby newbie here, trying to set up a Rails/MongoDB application on Mac OS X Snow Leopard. I installed Ruby 1.9.1 and RubyGems 1.3.7; which ruby and which gem point to the same directory. I'm using the Snow Leopard built-in Apache and Passenger 2.2.11, and the Rails template from the mongo site, which seems to work okay overall. The exact error that Passenger gives me is: /Users/User/Sites/feuerapp/vendor/rails/railties/lib/rails/gem_dependency.rb:119:Warning: Gem::Dependency#version_requirements is deprecated and will be removed on or after August 2010. Use #requirement **Notice: C extension not loaded. This is required for optimum MongoDB Ruby driver performance. You can install the extension as follows: gem install bson_ext If you continue to receive this message after installing, make sure that the bson_ext gem is in your load path and that the bson_ext and mongo gems are of the same version. Missing these required gems: redgreen You're running: ruby 1.9.1.376 at /usr/local/bin/ruby rubygems 1.3.7 at /Users/User/.gem/ruby/1.9.1, /usr/local/lib/ruby/gems/1.9.1 Run rake gems:install to install the missing gems. The weird thing is that redgreen is installed and looks fine to me: Dahlia:feuerapp User$ ls -la vendor/gems/ total 0 drwxr-xr-x 7 User staff 238 May 18 22:56 . drwxr-xr-x 5 User staff 170 May 18 23:00 .. drwxr-xr-x 11 User staff 374 May 18 22:56 factory_girl-1.2.4 drwxr-xr-x 11 User staff 374 May 18 22:56 mocha-0.9.8 drwxr-xr-x 7 User staff 238 May 18 22:56 mongo_mapper-0.7.6 drwxr-xr-x 7 User staff 238 May 18 22:56 redgreen-1.2.2 drwxr-xr-x 11 User staff 374 May 18 22:56 shoulda-2.10.3 Commenting out this line in environment.rb "solves" the issue, but that's not really what I want: config.gem 'redgreen' I don't understand much about gems yet, but from my limited understanding, redgreen should be there and be found, right?

    Read the article

  • Debugging SQL Server Slowness: Same Database, Different Servers

    - by Craig Walker
    For a while now we've been having anecdotal slowness on our newly-minted (VMWare-based) SQL Server 2005 database servers. Recently the problem has come to a head and I've started looking for the root cause of the issue. Here's the weird part: on the stored procedure that I'm using as a performance test case, I get a 30x difference in the execution speed depending on which DB server I run it on. This is using the same database (mdf) and log (ldf) files, detached, copied, and reattached from the slow server to the fast one. This doesn't appear to be a (virtualized) hardware issue: the slow server has 4x the CPU capacity and 2x the memory of the fast one. As best as I can tell, the problem lies in the environment/configuration of the servers (either the operating system or the SQL Server installation). However, I've checked a bunch of variables (SQL Server config options, running services, disk fragmentation) and found nothing that has made a difference in testing. What things should I be looking at? What tools can I use to investigate why this is happening?

    Read the article

  • cache_money only writing to memcached on creates and updates, and seemingly never looking in the cache

    - by Shane Liebling
    I seem to be having some extremely odd cache_money interactions. When I am on the console, and I create a new instance of a class and save it I see the cache misses and cache stores on my memcached console output. Then when the create finishes I see a bunch of cache deletions. If I then try to do any kind of find for the newly created object (or any other objects for that matter) I never see any cache access. This is highly confusing. I could kind of understand if all finds never hit the cache (though that in and of itself would be an issue requiring investigation), but finds do seem to hit the cache when the object is being created (checking for associations and such). Anyone have this experience in the past at all? Any thoughts? AFAIK there isn't really much in the way of configuration options for cache_money, and it certainly doesn't seem like there are any that would be on by default and be creating these kinds of symptoms. My cache_money config is basically straight out of the docs. Any help would be greatly appreciated.

    Read the article

  • Uploadify and rails 3 authenticity tokens

    - by Ceilingfish
    Hi chaps, I'm trying to get a file upload progress bar working in a rails 3 app using uploadify (http://www.uploadify.com) and I'm stuck at authenticity tokens. My current uploadify config looks like <script type="text/javascript" charset="utf-8"> $(document).ready(function() { $("#zip_input").uploadify({ 'uploader': '/flash/uploadify.swf', 'script': $("#upload").attr('action'), 'scriptData': { 'format': 'json', 'authenticity_token': encodeURIComponent('<%= form_authenticity_token if protect_against_forgery? %>') }, 'fileDataName': "world[zip]", //'scriptAccess': 'always', // Incomment this, if for some reason it doesn't work 'auto': true, 'fileDesc': 'Zip files only', 'fileExt': '*.zip', 'width': 120, 'height': 24, 'cancelImg': '/images/cancel.png', 'onComplete': function(event, data) { $.getScript(location.href) }, // We assume that we can refresh the list by doing a js get on the current page 'displayData': 'speed' }); }); </script> But I am getting this response from rails: Started POST "/worlds" for 127.0.0.1 at 2010-04-22 12:39:44 ActionController::InvalidAuthenticityToken (ActionController::InvalidAuthenticityToken): Rendered /opt/local/lib/ruby/gems/1.8/gems/actionpack-3.0.0.beta3/lib/action_dispatch/middleware/templates/rescues/_trace.erb (1.0ms) Rendered /opt/local/lib/ruby/gems/1.8/gems/actionpack-3.0.0.beta3/lib/action_dispatch/middleware/templates/rescues/_request_and_response.erb (6.6ms) Rendered /opt/local/lib/ruby/gems/1.8/gems/actionpack-3.0.0.beta3/lib/action_dispatch/middleware/templates/rescues/diagnostics.erb within rescues/layout (12.2ms) This appears to be because I'm not sending the authentication cookie along with the request. Does anyone know how I can get the values I should be sending there, and how I can make rails read it from HTTP POST rather than trying to find it as a cookie?
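
    Part of the problem is usually that the Flash uploader does not send the Rails session cookie, so even a correct authenticity_token has no session to be checked against. A common workaround is to skip CSRF verification for just the upload action (and protect it by some other means, e.g. posting the session id and restoring it in a Rack middleware); a sketch, with the controller name inferred from the POST to /worlds:

      class WorldsController < ApplicationController
        # Uploadify's Flash POST carries no session cookie, so the CSRF
        # check can never pass for this one action.
        skip_before_filter :verify_authenticity_token, :only => :create

        def create
          # handle params[:world][:zip] as usual
        end
      end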

    Read the article

  • Diffie-Hellman in Silverlight

    - by cmaduro
    I am trying to devise a security scheme for encrypting the application level data between a silverlight client, and a php webservice that I created. Since I am dealing with a public website the information I am pulling from the service is public, but the information I'm submitting to the webservice is not public. There is also a back end to the website for administration, so naturally all application data being pushed and pulled from the webservice to the silverlight administration back end must also be encrypted. Silverlight does not support asymmetric encryption, which would work for the public website. Symmetric encryption would only work on the back end because users do not log in to the public website, so no password based keys could be derived. Still symmetric encryption would be great, but I cannot securely save the private key in the silverlight client. Because it would either have to be hardcoded or read from some kind of config file. None of that is considered secure. So... plan B. My final alternative would be then to implement the Diffie-Hellman algorithm, which supports symmetric encryption by means of key agreement. However Diffie-Hellman is vulnerable to man-in-the-middle attacks. In other words, there is no guarantee that either side is sure of each others identity, making it possible for communication to be intercepted and altered without the receiving party knowing about it. It is thus recommended to use a private shared key to encrypt the key agreement handshaking, so that the identity of either party is confirmed. This brings me back to my initial problem that resulted in me needing to use Diffie-Hellman, how can I use a private key in a silverlight client without hardcoding it either in the code or an xml file. I'm all out of love on this one... is there any answer to this?
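
    For reference, the key-agreement step itself is just a pair of modular exponentiations on each side. A toy illustration using System.Numerics.BigInteger (available in .NET 4 on the server; Silverlight 3 has no BigInteger, so the client side would need its own big-integer type, and a real implementation needs large, well-chosen parameters plus the authentication discussed above):

      using System;
      using System.Numerics;
      using System.Security.Cryptography;

      class DiffieHellmanSketch
      {
          static BigInteger RandomPrivate(int bytes)
          {
              var buf = new byte[bytes];
              using (var rng = new RNGCryptoServiceProvider()) rng.GetBytes(buf);
              buf[buf.Length - 1] &= 0x7F;                 // keep the value non-negative
              return new BigInteger(buf);
          }

          static void Main()
          {
              // Tiny demo parameters; in practice p is a large safe prime.
              BigInteger p = 2147483647;                   // 2^31 - 1
              BigInteger g = 7;

              BigInteger a = RandomPrivate(16);            // client's secret
              BigInteger b = RandomPrivate(16);            // server's secret

              BigInteger A = BigInteger.ModPow(g, a, p);   // sent client -> server
              BigInteger B = BigInteger.ModPow(g, b, p);   // sent server -> client

              BigInteger sharedClient = BigInteger.ModPow(B, a, p);
              BigInteger sharedServer = BigInteger.ModPow(A, b, p);

              Console.WriteLine(sharedClient == sharedServer);  // True: same key material
          }
      }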

    Read the article

  • Silverlight, WCF service, integrated security AND ssl/https not possible?

    - by Flores
    I have this setup that works perfectly when using http: a Silverlight 3 client; a .NET 4 WCF service hosted in IIS with basicHttpBinding; and integrated security on the site. When SSL is set to required on the website, the setup stops working. Using the wcftestclient on the URI I get the message: The HTTP request is unauthorized with client authentication scheme 'Anonymous'. The authentication header received from the server was 'Negotiate,NTLM'. The remote server returned an error: (401) Unauthorized. Maybe this makes sense because the wcftestclient does not pass credentials? In the web.config the security mode for the service binding is set to 'Transport'. The Silverlight client is created like this: BasicHttpBinding basicHttpBinding = new BasicHttpBinding(); basicHttpBinding.Security.Mode = BasicHttpSecurityMode.Transport; var serviceClient = new ImportServiceClient(basicHttpBinding, serviceAddress); The service address is of course starting with https:// And the Silverlight client reports this error: The provided URI scheme 'https' is invalid; expected 'http'. Parameter name: via Remember, switching it back to http (and setting the security mode to 'TransportCredentialOnly') makes everything work again. Is the setup I want even supported? If so, how should it be configured?
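
    One thing worth double-checking is the service-side binding: TransportCredentialOnly is only valid over http, so the https endpoint needs mode="Transport" with Windows credentials if integrated security is to be kept. A hedged sketch of the server binding that would pair with the client code above (the client's 'expected http' error is what you get when the endpoint it reaches is still exposed with an http-only security mode):

      <basicHttpBinding>
        <binding name="SecureWindowsBinding">
          <!-- SSL on the wire, Windows/NTLM for the integrated security -->
          <security mode="Transport">
            <transport clientCredentialType="Windows" />
          </security>
        </binding>
      </basicHttpBinding>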

    Read the article

  • Can't get NUnit to work in Visual Web Developer 2010 Express

    - by UkraineTrain
    First off I was wondering whether it's possible to implement a functionality with Nunit where each time a project is created in Visual Web Developer 2010 I get a dialog asking whether I want to create a unit test project for current application like I saw it happen in the older versions of Visual Web Developer. I've tried just about everything to get NUnit 2.5.5 to work in Visual Web Developer 2010. For example, in nunit.exe.config I added under configuration <startup> <requiredRuntime version="v4.0.30319" /> </startup> and under runtime: <loadFromRemoteSources enabled="true" /> I then tried to launch nunit-console.exe in order to specify in the command line the option /framework=net-4.0, but the console would appear and instantly disappear. It didn't help when I tried running it as an administrator. I've also tried using Nunit as an external tool inside the Visual Web Developer by creating a toolbar as described in the following link: http://www.marthijnvandenheuvel.com/2010/06/09/using-nunit-in-visual-studio-2010/. It shows up as an icon in the toolbar. I ran my project called ToyStore and then clicked Nunit icon in order to launch it and it gave me a "System.IO.FileNotFoundException:Assembly not found:ToyStore.dll". So, needless to say, I'm pretty lost and don't know what to do and would greatly appreciate any help in getting Nunit to work.

    Read the article

  • Compressing a web service response for jQuery

    - by SirDemon
    I'm attempting to gzip a JSON response from an ASMX web service to be consumed on the client-side by jQuery. My web.config already has httpCompression set like so: (I'm using IIS 7) <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files" staticCompressionDisableCpuUsage="90" staticCompressionEnableCpuUsage="60" dynamicCompressionDisableCpuUsage="80" dynamicCompressionEnableCpuUsage="50"> <dynamicTypes> <add mimeType="application/javascript" enabled="true" /> <add mimeType="application/x-javascript" enabled="true" /> <add mimeType="text/css" enabled="true" /> <add mimeType="video/x-flv" enabled="true" /> <add mimeType="application/x-shockwave-flash" enabled="true" /> <add mimeType="text/javascript" enabled="true" /> <add mimeType="text/*" enabled="true" /> <add mimeType="application/json; charset=utf-8" enabled="true" /> </dynamicTypes> <staticTypes> <add mimeType="application/javascript" enabled="true" /> <add mimeType="application/x-javascript" enabled="true" /> <add mimeType="text/css" enabled="true" /> <add mimeType="video/x-flv" enabled="true" /> <add mimeType="application/x-shockwave-flash" enabled="true" /> <add mimeType="text/javascript" enabled="true" /> <add mimeType="text/*" enabled="true" /> </staticTypes> <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" /> </httpCompression> <urlCompression doDynamicCompression="true" doStaticCompression="true" /> Through fiddler I can see that normal aspx and other compressions work fine. However, the jQuery ajax request and response work as they should, only nothing gets compressed. What am I missing?

    Read the article

  • "no such file to load -- treetop/runtime" running "rake jobs:work"

    - by Ryan Marshall
    when i try and run the "rails server" or "rake jobs:work" i get the error: "no such file to load -- treetop/runtime" full trace: macbook-pro-2:domain ryan$ rake jobs:work --trace(in /Applications/htdocs/domain) rake aborted! no such file to load -- treetop/runtime /opt/local/lib/ruby/gems/1.8/gems/mail-2.2.14/lib/mail.rb:68:in require' /opt/local/lib/ruby/gems/1.8/gems/mail-2.2.14/lib/mail.rb:68 /opt/local/lib/ruby/gems/1.8/gems/mail-2.2.14/lib/mail.rb:61:ineach' /opt/local/lib/ruby/gems/1.8/gems/mail-2.2.14/lib/mail.rb:61 /opt/local/lib/ruby/gems/1.8/gems/delayed_job-2.1.2/lib/delayed/performable_mailer.rb:1:in require' /opt/local/lib/ruby/gems/1.8/gems/delayed_job-2.1.2/lib/delayed/performable_mailer.rb:1 /opt/local/lib/ruby/gems/1.8/gems/delayed_job-2.1.2/lib/delayed_job.rb:5:inrequire' /opt/local/lib/ruby/gems/1.8/gems/delayed_job-2.1.2/lib/delayed_job.rb:5 /opt/local/lib/ruby/gems/1.8/gems/bundler-1.0.7/lib/bundler/runtime.rb:64:in require' /opt/local/lib/ruby/gems/1.8/gems/bundler-1.0.7/lib/bundler/runtime.rb:64:inrequire' /opt/local/lib/ruby/gems/1.8/gems/bundler-1.0.7/lib/bundler/runtime.rb:62:in each' /opt/local/lib/ruby/gems/1.8/gems/bundler-1.0.7/lib/bundler/runtime.rb:62:inrequire' /opt/local/lib/ruby/gems/1.8/gems/bundler-1.0.7/lib/bundler/runtime.rb:51:in each' /opt/local/lib/ruby/gems/1.8/gems/bundler-1.0.7/lib/bundler/runtime.rb:51:inrequire' /opt/local/lib/ruby/gems/1.8/gems/bundler-1.0.7/lib/bundler.rb:112:in require' /ApApplications/htdocs/domain/config/application.rb:7 /opt/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:ingem_original_require' /opt/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in require' /Applications/htdocs/domain/Rakefile:4 /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2383:inload' /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2383:in raw_load_rakefile' /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2017:inload_rakefile' /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in standard_exception_handling' /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2016:inload_rakefile' /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2000:in run' /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:instandard_exception_handling' /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:1998:in run' /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/bin/rake:31 /opt/local/bin/rake:19:inload' /opt/local/bin/rake:19 in my Gemfile i have: "gem 'delayed_job'"
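
    The mail gem (pulled in by delayed_job in that trace) needs treetop at runtime, so one likely fix is simply to declare it and re-bundle; a sketch of the Gemfile addition (the version constraint is an assumption):

      # Gemfile
      gem 'treetop', '~> 1.4'

      # then: bundle install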

    Read the article

  • How can I prevent 'objects you are adding to the designer use a different data connection...'?

    - by Timothy Khouri
    I am using Visual Studio 2010, and I have a LINQ-to-SQL DBML file that my colleagues and I are using for this project. We have a connection string in the web.config file that the DBML is using. However, when I drag a new table from my "Server Explorer" onto the DBML file... I get presented with a dialog that demands that do one of these two options: Allow visual studio to change the connection string to match the one in my solution explorer. Cancel the operation (meaning, I don't get my table). I don't really care too much about the debate as why the PMs/devs who made this tool didn't allow a third option - "Create the object anyway - don't worry, I'm a developer!" What I am thinking would be a good solution is if I can create a connection in the Server Explorer - WITHOUT A WIZARD. If I can just paste a connection string, that would be awesome! Because then the DBML designer won't freak out on me :O) If anyone knows the answer to this question, or how to do the above, please lemme know!

    Read the article

  • Custom Error Handling

    - by Michael
    Using GoDaddy to host my site (I know that's my first problem)! :-) Trying to setup custom error messages for my site using IIS7. GoDaddy allows you to setup a 404 in their control panel, but I can't override this, or setup any additional error redirects, specifically a 500-server error. Here is my web.config file: <configuration> <system.webServer> <rewrite> <rules> <rule name="Redirect to WWW" stopProcessing="true"> <match url=".*" /> <conditions> <add input="{HTTP_HOST}" pattern="^mysite.com$" /> </conditions> <action type="Redirect" url="http://www.mysite.com/{R:0}" redirectType="Permanent" /> </rule> </rules> </rewrite> </system.webServer> <system.web> <customErrors mode="On" defaultRedirect="http://www.mysite.com/oops.php"> <error statusCode="404" redirect="http://www.mysite.com/oops.php?error=404" /> <error statusCode="500" redirect="http://www.mysite.com/oops.php?error=500" /> </customErrors> </system.web> </configuration>
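
    Because customErrors only applies to requests that reach the ASP.NET pipeline, and a host-level 404 rule can shadow it, the IIS7-native equivalent is the httpErrors section, which can often be overridden from web.config even on shared hosting (if the host locks the section, IIS reports a configuration error instead). A sketch using the same redirect targets as above:

      <system.webServer>
        <httpErrors errorMode="Custom" existingResponse="Replace">
          <remove statusCode="404" subStatusCode="-1" />
          <error statusCode="404" path="http://www.mysite.com/oops.php?error=404" responseMode="Redirect" />
          <remove statusCode="500" subStatusCode="-1" />
          <error statusCode="500" path="http://www.mysite.com/oops.php?error=500" responseMode="Redirect" />
        </httpErrors>
      </system.webServer>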

    Read the article

  • "Cannot open user default database" error with "User Instance=True"

    - by Keith
    I have a desktop application that uses Sql user instancing. This is my connection string: "Data Source=.\SqlExpress; AttachDbFilename=C:\path\file.mdf; Integrated Security=True; User Instance=True; Connect Timeout=100;" My application creates this DB, downloads a load of data into it from a web service and then does a lot of actions with it. The problem comes when I attempt to re-open the connection. I get a SqlException: "Cannot open user default database. Login failed. Login failed for user 'myDomain\myusername'." This error makes no sense in this context - I have no default database. I'm logging in to an instance created just for the current application, running separately from SqlExpress. There's no other way to connect to this DB. If I start the SqlExpress service and connect to the default instance it won't be visible. It only exists for this application. The file on disk is locked by the SqlExpress instance service running under the application. if I stop the app and restart it the connection works first time, but fails on re-opening. If I just stop the app I can delete the .mdf files and begin again, but it still crashes when I re-open the connection. As my app started the instance running as me my current user should have access to every DB in the instance. This doesn't happen for other users of the same code, which suggests that it's a SQL config issue. Does anyone have any idea what causes this and how to work around it?

    Read the article

  • Git-svn refuses to create branch on svn repository error: "not in the same repository"

    - by Danny
    I am attempting to create an svn branch using git-svn. The repository was created with --stdlayout. Unfortunately it generates an error stating "Source and dest appear not to be in the same repository". The error appears to be the result of it not including the username in the source url. $ git svn branch foo-as-bar -m "Attempt to make Foo into Bar." Copying svn+ssh://my.foo.company/r/sandbox/foo/trunk at r1173 to svn+ssh://[email protected]/r/sandbox/foo/branches/foo-as-bar... Trying to use an unsupported feature: Source and dest appear not to be in the same repository (src: 'svn+ssh://my.foo.company/r/sandbox/foo/trunk'; dst: 'svn+ssh://[email protected]/r/sandbox/foo/branches/foo-as-bar') at /home/me/.install/git/libexec/git-core/git-svn line 610 I initially thought this was simply a configuration issue, but examining .git/config doesn't suggest anything incorrect. [svn-remote "svn"] url = svn+ssh://[email protected]/r fetch = sandbox/foo/trunk:refs/remotes/trunk branches = sandbox/foo/branches/*:refs/remotes/* tags = sandbox/foo/tags/*:refs/remotes/tags/* I am using git version 1.6.3.3. Can anyone shed any light on why this might be occurring, and how best to address it?

    Read the article

  • ACL actions tag causes the 'roles resource tree' to draw incorrectly in admin/system/permissions/roles

    - by latvian
    Hi, we created a new action, similar to 'hold', 'ship' and the others in the 'sales_order/view' admin section, that can be triggered by clicking a button. Afterward, we added our new action to the ACL with the following code in config.xml: <acl> <resources> <admin> <children> <sales> <children> <order> <children> <actions translate="title"> <title>Actions</title> <children> <shipNew translate="title"><title>Ship Ups</title></shipNew> </children> </actions> </children> <sort_order>10</sort_order> </order> </children> </sales> </children> </admin> </resources> </acl> The ACL functionality works; however, in the 'Resources Tree' (System/Permissions/Roles/Role Resources) our new action never shows up as selected (checked), even though it is allowed for the particular role. I can see from the 'admin_rule' table that the resource id for our new action is allowed, so it should be selected, but it is not. While trying to solve this issue I looked into the template (permissions/rolesedit.phtml) and found that the 'resource tree' is drawn with JavaScript... that's where I got stuck, due to my limited knowledge of JavaScript. Why does the resource tree not display our new ACL entry correctly, that is, why is the check box never checked? Thank you for helping, margots

    Read the article

  • URLRewriter.net fails relative paths when using more than one substring in URL

    - by Andreas Strandfelt
    Hi. I have installed the URLRewriter on my server, and it works fine, but I have a rather big problem. Relative links in hyperlinks, CSS links, images etc. don't work when I have URLs with more than one substring. E.g. (sorry, no http:// in front, as I do not have enough reputation): dkbyg.strandweb.dk/Leje-og-udlejning-arbejdskraft leads to the path dkbyg.strandweb.dk/Workers.aspx and works just fine. But dkbyg.strandweb.dk/Leje-og-udlejning-arbejdskraft/Midtjylland leads to dkbyg.strandweb.dk/Workers.aspx?Region=Midtjylland using this line in the Web.config: <rewrite url="~/Leje-og-udlejning-arbejdskraft/(.+)" to="~/Workers.aspx?Region=$1"/> It rewrites just fine, but my relative links don't work anymore. CSS, images, links and so on think my root is now http://dkbyg.strandweb.dk/Leje-og-udlejning-arbejdskraft, which of course doesn't exist. Can't this be fixed? All my links are correctly set using the ~/, like this: <asp:HyperLink ID="HyperLink3" CssClass="black_text" NavigateUrl="~/Forgot-Password" runat="server">I have forgotten my password</asp:HyperLink>
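
    Server controls with ~/ URLs are resolved on the server, but plain <link>, <img> and <script> tags with relative paths are resolved by the browser against the rewritten, multi-segment URL it sees. One sketch of a fix is to emit root-resolved paths for those tags too (ResolveUrl is available anywhere inside a Page; the file names below are placeholders):

      <%-- Root-resolved references survive any number of URL segments --%>
      <link rel="stylesheet" type="text/css" href="<%= ResolveUrl("~/css/site.css") %>" />
      <script type="text/javascript" src="<%= ResolveUrl("~/js/site.js") %>"></script>
      <img src="<%= ResolveUrl("~/images/logo.png") %>" alt="logo" />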

    Read the article

  • Django-modpython project in a directory

    - by Ankit Jaiswal
    Hi All, I am deploying a Django project on apache server with mod_python in linux. I have created a directory structure like: /var/www/html/django/demoInstall where demoInstall is my project. In the httpd.conf I have put the following code. <Location "/django/demoInstall"> SetHandler python-program PythonHandler django.core.handlers.modpython SetEnv DJANGO_SETTINGS_MODULE demoInstall.settings PythonOption django.root django/demoInstall PythonDebug On PythonPath "['/var/www/html/django'] + sys.path" </Location> It is getting me the django environment but the issue is that the urls mentioned in urls.py are not working correctly. In my url file I have mentioned the url like: (r'^$', views.index), Now, in the browser I am putting the url like : http://domainname/django/demoInstall/ and I am expecting the views.index to be invoked. But I guess it is expecting the url to be only: http://domainname/ . When I change the url mapping to: (r'^django/demoInstall$', views.index), it works fine. Please suggest as I do not want to change all the mappings in url config file. Thanks in advance.
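
    One detail worth checking: in the mod_python docs django.root is given with a leading slash and must exactly match the URL prefix being stripped; without it Django keeps seeing the full /django/demoInstall/... path, so only prefixed patterns match. A sketch of the adjusted directive, everything else unchanged:

      <Location "/django/demoInstall">
          SetHandler python-program
          PythonHandler django.core.handlers.modpython
          SetEnv DJANGO_SETTINGS_MODULE demoInstall.settings
          PythonOption django.root /django/demoInstall
          PythonDebug On
          PythonPath "['/var/www/html/django'] + sys.path"
      </Location>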

    Read the article

  • Why does IIS respond to a secure(SSL) page request with a 302 to its non-secure version?

    - by ISawrub
    I have SSL installed at the root of a server. I have a page whose code behind code is supposed to redirect after certain validation to a secure page. Here's the redirect code: switch (PageBase2.GetParameterValue("Environment")) //Retrieves App Setting named Environment from web.config { case "Server": strURL = @"https://" + HttpContext.Current.Request.Url.Authority + "/checkout/payment.aspx"; break; case "Local": strURL = @"http://" + HttpContext.Current.Request.Url.Authority + "/checkout/payment.aspx"; break; default: strURL = @"https://" + HttpContext.Current.Request.Url.Authority + "/checkout/payment.aspx"; break; } Response.Redirect(strURL, false); But the page that's been served by IIS is non-secure. I looked at the firebug console and it appears that the client does make a get request to https://server/checkout/payment.aspx but IIS responds with a 302 to http://server/checkout/payment.aspx Any clues, as to what could be causing it. I've even tried forcing SSL for the page, but it doesn't work I get 403.4 error. (SSL is required to view this resource.) And if i remove the redirection logic and code the payment page to redirect to its SSL version when the connection is not secure using Request.IsSecureConnection, i end up with an endless redirect loop, simply because IIS still won't serve the secure version without a 302. Any ideas?

    Read the article

  • Exception during secure communication implementation

    - by Liran
    Hi everyone. I'm trying to implement simple secured client-server communication using WCF. When I launch my server everything is OK, but when I launch my client I get this error: Error : An error occurred while making the HTTP request to https://localhost:8000/ExchangeService. This could be due to the fact that the server certificate is not configured properly with HTTP.SYS in the HTTPS case. This could also be caused by a mismatch of the security binding between the client and the server. This is the server code: Uri address = new Uri("https://localhost:8000/ExchangeService"); WSHttpBinding binding = new WSHttpBinding(); //Set Binding Params binding.Security.Mode = SecurityMode.Transport; binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.None; binding.Security.Transport.ProxyCredentialType = HttpProxyCredentialType.None; Type contract = typeof(ExchangeService.ServiceContract.ITradeService); ServiceHost host = new ServiceHost(typeof(TradeService)); host.AddServiceEndpoint(contract, binding, address); host.Open(); This is the client configuration (app.config): </client> <bindings> <wsHttpBinding> <binding name="TradeWsHttpBinding"> <security mode="Transport"> <transport clientCredentialType="None" proxyCredentialType ="None"/> </security> </binding> </wsHttpBinding> </bindings> The security configuration on both the client and the server is the same, and I don't need a certificate for the server with that kind of security (transport), so why do I get this exception? Thanks...
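
    The error text points at the usual culprit for self-hosted Transport security: HTTP.SYS has no SSL certificate bound to port 8000, so the https listener cannot be established (Transport security over https always needs a server certificate). A sketch of the binding command on Vista/2008 and later (the certhash thumbprint and appid GUID below are placeholders; on XP/2003 the equivalent tool is httpcfg):

      netsh http add sslcert ipport=0.0.0.0:8000 ^
          certhash=0123456789abcdef0123456789abcdef01234567 ^
          appid={12345678-1234-1234-1234-123456789abc}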

    Read the article
