Search Results

Search found 9715 results on 389 pages for 'servers'.


  • C# COM Cross Thread problem

    - by user364676
    Hi, we're developing software to control a scientific measuring device. It provides a COM interface that defines several functions to set measurement parameters, and it fires an event when it has measured data. In order to test our software, I'm implementing a simulation of that device. The COM object runs a loop which periodically fires the event; another loop in the client app should set up the COM simulator using the given functions. I created a class for measurement parameters, which is instantiated when setting up a new measurement:

        // COM object
        public class MeasurementParams
        {
            public double Param1;
            public double Param2;
        }

        public class COM_Sim : ICOMDevice
        {
            public MeasurementParams newMeasurement;
            IClient client;

            public int NewMeasurement()
            {
                newMeasurement = new MeasurementParams();
            }

            public int SetParam1(double val)
            {
                // why is newMeasurement null when this method is called from the client loop?
                newMeasurement.Param1 = val;
            }

            void loop()
            {
                while (true)
                {
                    // fire event
                    client.HandleEvent();
                }
            }
        }

        public class Client : IClient
        {
            ICOMDevice server;

            public int HandleEvent()
            {
                // handle this event
                server.NewMeasurement();
                server.SetParam1(0.0);
            }

            void loop()
            {
                while (true)
                {
                    // do some stuff...
                    server.NewMeasurement();
                    server.SetParam1(0.0);
                }
            }
        }

    Both of the loops run in independent threads. When server.NewMeasurement() is called, the object on the server is set to a new instance, but in the next function call the object is null again. Doing the same when handling the server event works perfectly, because the method runs in the server's thread. How do I make it work from the client thread as well? As the client is meant to work with the real device, I cannot modify the interfaces given by the manufacturer. I also need to set up measurements independently of the event handler, which does not fire regularly. I assume this problem is related to multithreaded COM behavior, but I found nothing on this topic.
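    One possible direction (a minimal sketch, not from the original question; the ComDispatcher type and all of its members are hypothetical): COM apartment rules can give each calling thread a different view of an apartment-bound object, so a common workaround is to funnel every call to the simulator through one dedicated thread, making all callers observe the same instance.

        using System;
        using System.Collections.Concurrent;
        using System.Threading;

        // Hypothetical helper: every call to the COM simulator goes through one
        // dedicated thread, so all callers observe the same apartment state.
        class ComDispatcher : IDisposable
        {
            private readonly BlockingCollection<Action> _queue = new BlockingCollection<Action>();
            private readonly Thread _thread;

            public ComDispatcher()
            {
                _thread = new Thread(() =>
                {
                    // drain the queue; each action runs on this single thread
                    foreach (var action in _queue.GetConsumingEnumerable())
                        action();
                });
                _thread.SetApartmentState(ApartmentState.STA); // match the COM object's apartment
                _thread.IsBackground = true;
                _thread.Start();
            }

            public void Invoke(Action action)
            {
                var done = new ManualResetEventSlim();
                _queue.Add(() => { action(); done.Set(); });
                done.Wait(); // block the caller until the call has run on the COM thread
            }

            public void Dispose()
            {
                _queue.CompleteAdding();
            }
        }

    With something like this, the client loop would call dispatcher.Invoke(() => { server.NewMeasurement(); server.SetParam1(0.0); }); so that both calls run on the thread that owns the COM object.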


  • How to properly develop and deploy features for existing asp.net applications on IIS

    - by Tomh
    My question actually consists of multiple questions. I frequently read about companies who deploy a small subset of features for a select group of customers using the live database. Ruby on Rails and its ecosystem have deployment tools and database migrations to deploy or roll back such features in a live production or staging environment. My question: how is this done for an ASP.NET (MVC in particular) application? How do you test your newly released features against live data? Do you have any tools to modify the existing database and roll back changes if necessary? Do you make backups before deployment?

    Update: Maybe I should point out that my question is not really clear; getting more answers here will help me phrase it better. To make it easier I will describe a situation I commonly see with some of my clients. My clients have large deployments of popular web applications. They do not have staging/QA/testing servers (yes, this is not optimal). The data their apps consist of are images, XML files, user uploads, and data in SQL Server. Having a few records of their production database and a couple of dummy files is not a substitute for testing against real data, in my opinion. How would you design a workflow that can create an acceptable environment to mimic a production environment before going live?


  • ruby on rails: undefined method 'version_requirements' when attempting to start server after new install

    - by ezabak
    Hi there, I had to do a fresh install of Ruby on Rails recently. When I attempted to start the server for a project I had been working on previous to this new install, I received the following error:

        $ ruby script/server
        => Booting WEBrick...
        ./script/../config/../vendor/rails/railties/lib/rails/gem_dependency.rb:107:in `requirement': undefined method `version_requirements' for #<Gem::Dependency:0xb74bf764> (NoMethodError)
            from ./script/../config/../vendor/rails/railties/lib/initializer.rb:292:in `check_gem_dependencies'
            from ./script/../config/../vendor/rails/railties/lib/initializer.rb:292:in `map'
            from ./script/../config/../vendor/rails/railties/lib/initializer.rb:292:in `check_gem_dependencies'
            from ./script/../config/../vendor/rails/railties/lib/initializer.rb:165:in `process'
            from ./script/../config/../vendor/rails/railties/lib/initializer.rb:112:in `send'
            from ./script/../config/../vendor/rails/railties/lib/initializer.rb:112:in `run'
            from /media/78C0-455B/bidmc/schedule/config/environment.rb:13
            from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:36:in `gem_original_require'
            from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:36:in `require'
            from /media/78C0-455B/bidmc/schedule/vendor/rails/activesupport/lib/active_support/dependencies.rb:153:in `require'
            from /media/78C0-455B/bidmc/schedule/vendor/rails/activesupport/lib/active_support/dependencies.rb:521:in `new_constants_in'
            from /media/78C0-455B/bidmc/schedule/vendor/rails/activesupport/lib/active_support/dependencies.rb:153:in `require'
            from /media/78C0-455B/bidmc/schedule/vendor/rails/railties/lib/commands/servers/webrick.rb:59
            from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:36:in `gem_original_require'
            from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:36:in `require'
            from /media/78C0-455B/bidmc/schedule/vendor/rails/activesupport/lib/active_support/dependencies.rb:153:in `require'
            from /media/78C0-455B/bidmc/schedule/vendor/rails/activesupport/lib/active_support/dependencies.rb:521:in `new_constants_in'
            from /media/78C0-455B/bidmc/schedule/vendor/rails/activesupport/lib/active_support/dependencies.rb:153:in `require'
            from /media/78C0-455B/bidmc/schedule/vendor/rails/railties/lib/commands/server.rb:49
            from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:36:in `gem_original_require'
            from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:36:in `require'
            from script/server:3

    I have the latest versions of ruby, rubygems, and rails. Any suggestions? Thanks.


  • Modify shell script to monitor/ping multiple ip addresses

    - by Alex
    Alright, so I need to constantly monitor multiple routers and computers to make sure they remain online. I have found a great script here that will notify me via Growl (so I can get instant notifications on my phone) if a single IP cannot be pinged. I have been attempting to modify the script to ping multiple addresses, with little luck. I'm having trouble figuring out how to ping a down server while the script keeps watching the online servers. Any help would be greatly appreciated. I haven't done much shell scripting, so this is quite new to me. Thanks!

        #!/bin/sh
        # Growl my Router alive!
        # 2010 by zionthelion73 [at] gmail . com
        # use it for free
        # redistribute or modify but keep these comments
        # not for commercial purposes

        # path must be absolute or in "./path" form but relative to growlnotify position
        # document icon is used, not document content
        iconpath="/path/to/router/icon/file/internet.png"

        # Put the IP address of your router here
        localip=192.168.1.1

        clear
        echo 'Router avaiability notification with Growl'

        avaiable=false
        com="################"   # comment prefix for logging purposes

        while true; do
            if $avaiable
            then
                echo "$com 1) $localip avaiable $com"
                echo "1"
                while ping -c 1 -t 2 $localip
                do
                    sleep 5
                done
                growlnotify -s -I $iconpath -m "$localip is offline"
                avaiable=false
            else
                echo "$com 2) $localip not avaiable $com"
                # try to ping the router until it comes back, and notify it
                while ! (ping -c 1 -t 2 $localip)
                do
                    echo "$com trying.... $com"
                    sleep 5
                done
                echo "$com found $localip $com"
                growlnotify -s -I $iconpath -m "$localip is online"
                avaiable=true
            fi
            sleep 5
        done


  • Images in database vs file system

    - by Jesse
    We have a project coming up where we will be building a whole backend CMS system that will power our entire extranet and intranet with one package. The question I have been trying to find an answer to is which is better: storing images in the database (SQL Server 2005), so we get integrity, a single replication plan, etc., or storing them on the file system? One issue we have is that we have multiple load-balanced servers that are required to have the same data at all times. As of now we have SQL replication taking care of that, but file replication seems to be a little tougher. Another consideration is that we would like to have multiple resolutions of the same image; we are not sure if creating and storing each version on the file system would be best, or if dynamically creating the requested resolution upon request would be better. Our concerns are with the following:

    - Data integrity
    - Data replication
    - Multiple resolutions
    - Speed of database vs file system
    - Overhead load of database vs file system
    - Data management and backup

    Does anyone have a similar situation or any input on what would be recommended? Thanks in advance for the help!
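    If the dynamic route is chosen, a minimal sketch of the usual resize-on-request pattern (the helper, its paths, and the naming scheme are assumptions, not part of the question): create the requested resolution the first time it is asked for, cache the result on the file system, and serve the cached copy from then on.

        using System.Drawing;
        using System.Drawing.Drawing2D;
        using System.IO;

        static class ImageCache
        {
            // Hypothetical helper: returns the path of a cached resized copy,
            // creating it on first request.
            public static string GetResizedImage(string sourcePath, string cacheDir, int width)
            {
                // cache key: requested width + original file name
                string cached = Path.Combine(cacheDir, width + "_" + Path.GetFileName(sourcePath));
                if (!File.Exists(cached))                // pay the resize cost only once
                {
                    using (var src = Image.FromFile(sourcePath))
                    {
                        int height = src.Height * width / src.Width;   // preserve aspect ratio
                        using (var dst = new Bitmap(width, height))
                        using (var g = Graphics.FromImage(dst))
                        {
                            g.InterpolationMode = InterpolationMode.HighQualityBicubic;
                            g.DrawImage(src, 0, 0, width, height);
                            dst.Save(cached);
                        }
                    }
                }
                return cached;
            }
        }

    With this shape, only the originals need replicating between the load-balanced servers; each resized copy is a derived artifact that any server can rebuild locally.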


  • localhost not going to desired VirtualHost

    - by ladaghini
    I have several VirtualHosts set up on my computer. I'd like to visit the site I'm currently working on from a different PC using my computer's IP address, but every config I've tried keeps taking me to a different virtual host (in fact, the first virtual host I set up on my computer). How do I set up the Apache virtual host configs to ensure that the IP address takes me to the site I want?

    /etc/apache2/sites-available/site-i-want-to-show-up-with-ip-address.conf contains:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerAlias currentsite.com
            DocumentRoot /path/to/root/of/site-i-want-to-show-up
            ServerName localhost
            ScriptAlias /awstats/ /usr/lib/cgi-bin/
            CustomLog /var/log/apache2/current-site-access.log combined
        </VirtualHost>

    And /etc/apache2/sites-available/site-that-keeps-showing-up.conf contains:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerAlias theothersite.com
            DocumentRoot /path/to/it
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
        </VirtualHost>

    I'd appreciate anyone's help. Also, I don't know too much about configuring web servers; I used tutorials to get the above code.


  • WMI Query Script as a Job

    - by Kenneth
    I have two scripts. One calls the other with a list of servers as parameters. The second script is designed to execute a WMI query. When I run it manually, it does this perfectly. When I try to run it as a job, it hangs forever and I have to remove it. For the sake of space, here is the relevant part of the calling script, ProcessServers.ps1:

        Start-Job -FilePath .\GetServerDetailsLight.ps1 -ArgumentList $sqlsrv,$destdb,$server,$instance

    GetServerDetailsLight.ps1:

        param($sqlsrv,$destdb,$server,$instance)
        $password = get-content C:\SQLPS\auth.txt | convertto-securestring
        $credentials = new-object -typename System.Management.Automation.PSCredential -argumentlist "DOMAIN\MYUSER",$password
        [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO')
        $box_id = 0;
        if ($sqlsrv.length -eq 0) {
            write-output "No data passed"
            break
        }
        function getinfo {
            param(
                [string]$svr,
                [string]$inst
            )
            "Entered GetInfo with: $svr,$inst"
            $cs = get-wmiobject win32_operatingsystem -computername $svr -credential $credentials -authentication 6 -Verbose -Debug |
                select Name, Model, Manufacturer, Description, DNSHostName, Domain, DomainRole, PartOfDomain,
                       NumberOfProcessors, SystemType, TotalPhysicalMemory, UserName, Workgroup
            write-output "WMI Results: $cs"
        }
        getinfo $server $instance
        write-output "Complete"

    Executed as a job it will show as 'running' forever:

        PS C:\sqlps> Start-Job -FilePath .\GetServerDetailsLight.ps1 -ArgumentList DBSERVER,LOGDB,SERVER01,SERVER01

        Id   Name    State     HasMoreData   Location    Command
        --   ----    -----     -----------   --------    -------
        21   Job21   Running   True          localhost   param($sqlsrv,$destdb,...

        GAC    Version      Location
        ---    -------      --------
        True   v2.0.50727   C:\WINDOWS\assembly\GAC_MSIL\Microsoft.SqlServer.Smo\10.0.0.0__89845dcd8080cc91\Microsoft.SqlServer.Smo.dll
        getinfo MSDCHR01 MSDCHR01
        Entered GetInfo with: SERVER01,SERVER01

    The last output I ever get is 'Entered GetInfo with: SERVER01,SERVER01'. If I run it manually like so, the WMI query executes just as expected:

        PS C:\sqlps> .\GetServerDetailsLight.ps1 DBSERVER LOGDB SERVER01 SERVER01

    I am trying to determine why this is, or at least find a useful way to trap errors from within jobs. Thanks!


  • Speeding up a SOAP-powered website

    - by ChrisRamakers
    Hi all, we're currently looking into doing some performance tweaking on a website which relies heavily on a SOAP webservice. But... our servers are located in Belgium and the webservice we connect to is located in San Francisco, so it's a long-distance connection to say the least. Our website is PHP powered, using PHP's built-in SoapClient class. On average a call to the webservice takes 0.7 seconds and we are doing about 3-5 requests per page. All possible request/response caching is already implemented, so we are now looking at other ways to improve the connection speed. This is the code which instantiates the SoapClient; what I'm looking for now is other ways/methods to improve the speed of single requests. Does anyone have ideas or suggestions?

        private function _createClient()
        {
            try {
                $wsdl = sprintf($this->config->wsUrl.'?wsdl', $this->wsdl);
                $client = new SoapClient($wsdl, array(
                    'soap_version'       => SOAP_1_1,
                    'encoding'           => 'utf-8',
                    'connection_timeout' => 5,
                    'cache_wsdl'         => 1,
                    'trace'              => 1,
                    'features'           => SOAP_SINGLE_ELEMENT_ARRAYS
                ));
                $header_tags = array(
                    'username' => new SOAPVar($this->config->wsUsername, XSD_STRING, null, null, null, $this->ns),
                    'password' => new SOAPVar(md5($this->config->wsPassword), XSD_STRING, null, null, null, $this->ns)
                );
                $header_body = new SOAPVar($header_tags, SOAP_ENC_OBJECT);
                $header = new SOAPHeader($this->ns, 'AuthHeaderElement', $header_body);
                $client->__setSoapHeaders($header);
            } catch (SoapFault $e) {
                controller('Error')->error($id.': Webservice connection error '.$e->getCode());
                exit;
            }
            $this->client = $client;
            return $this->client;
        }


  • Looking for a good dev environment for OSGi bundles

    - by Riduidel
    Hi, I'm currently investigating the field of dev environments for OSGi bundles. My goal is to find a way to develop, test, and debug the bundles I'll be coding with ease. Besides, I have some "cultural" requirements:

    - I want to be able to use Java continuous integration servers (typically, Hudson).
    - As a consequence of that first requirement, I want to have a repeatable, one-click build process. My typical tool for that is Maven.
    - And finally, being a long-term Eclipse user, and having m2eclipse at hand to merge my Eclipse env with my Maven one, I obviously want to be able to test and debug with that IDE.

    So far, here is what I know I can use (and have already tested):

    - maven-bundle-plugin and maven-ipojo-plugin, which both offer clean packaging facilities.
    - Maven Pax (and Eclipse Pax), which I am not really satisfied with: Maven Pax generates a very heavy project, where adding dependencies is very error-prone (the maven pax:import-bundle command line, with all its arguments, is a hell per se).
    - Karaf, which seems to have some nice direct Maven provisioning, but I don't know how to integrate it with my Eclipse, besides using the traditional JPDA bridge. It also seems to be more production-oriented than dev-oriented, and as such may require heavy configuration to fit my needs (although reading its user manual doesn't reveal that).

    Have you got any ideas? Some Maven/Eclipse plugins?


  • How do I fix a broken connection to DB2 from a web application?

    - by Eddie White
    I support some old web applications: VBScript-based ASP for the UI and VB6 COM modules for the business and data access layers. Last weekend, I installed DB2 Connect Enterprise Edition v8 fixpack 14 on several Windows 2000 servers, and one of the web apps now errors out on null data when it calls the built-in VBScript function FormatNumber. This numeric data is retrieved by a SQL Server query, but the only way the SQL Server column is populated is with the calculated results returned from a DB2 query earlier in a progression through several pages. When I installed DB2 Connect EE, one of the components loaded was MDAC 2.7. I followed corporate instructions and had the installation save an ODBC System Data Source, which reported a good connection when I tested it after the install. For what it's worth, the project references in the production VB6 modules pointed to MDAC 2.5. I have tried recompiling and deploying to COM on my test server new versions of the VB6 modules referencing MDAC 2.7. My development environment is Windows XP Pro, with MDAC 2.8 and DB2 Connect EE v9.5 installed. When I deployed the updated VB6 DLLs, CreateObject fails to instantiate the classes with the error message "The class does not support automation or the requested interface". I've rolled the DB2 Connect install back and have reinstalled v8 of the DB2 runtime client, which was the previous environment. The problem, however, persists.


  • In WMI, can I use a join (or something similar) to acquire the IisWebServer object for a site, given its physical path?

    - by Precipitous
    Given a server name and a physical path, I'd like to be able to hunt down the IISWebServer object and ApplicationPool. A website URL is also an acceptable input. Our technologies are IIS 6, WMI, and access via C# or PowerShell 2. I'm certain this would be easier with IIS 7 and its managed API; we don't have that yet. Here's what I can do: get a list of IIS virtual directories from IISWebVirtualDirSetting and filter (offline) for the matching physical path:

        $theVirtualDir = gwmi -Namespace "root/MicrosoftIISv2" `
            -ComputerName $servername -authentication PacketPrivacy `
            -class "IISWebVirtualDirSetting" `
            | where-object {$_.Path -like $deployLocation}

    From the virtual directory object, I can get a name (like W3SVC/40565456/root). Given this name, I can get to other goodies, such as the IIS web server object:

        gwmi -Namespace "root/MicrosoftIISv2" `
            -ComputerName $servername `
            -authentication PacketPrivacy `
            -Query "SELECT * FROM IisWebServer WHERE Name='W3SVC/40589473'"

    The questions, restated:

    1) This is a query language. Can I join or subquery so that one WMI query statement gets web servers based on IISWebVirtualDir.Path? How?

    2) In solving 1, you'll have to explain how to query on the Path property. Why is this an invalid query?

        "SELECT * FROM IISWebVirtualDirSetting WHERE Path='D:\sites\globaldominator'"
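    On question 2, one thing worth checking: WQL treats the backslash as an escape character, so a literal path in a WHERE clause generally needs its backslashes doubled. A hedged sketch of the same lookup from C# through System.Management (the server name and path are placeholders):

        using System;
        using System.Management;

        class FindSite
        {
            static void Main()
            {
                // double every backslash, since WQL treats '\' as an escape character
                string path = @"D:\sites\globaldominator".Replace(@"\", @"\\");

                var scope = new ManagementScope(@"\\SERVERNAME\root\MicrosoftIISv2");
                scope.Options.Authentication = AuthenticationLevel.PacketPrivacy;

                var query = new ObjectQuery(
                    "SELECT Name FROM IISWebVirtualDirSetting WHERE Path = '" + path + "'");

                using (var searcher = new ManagementObjectSearcher(scope, query))
                {
                    foreach (ManagementObject dir in searcher.Get())
                        Console.WriteLine(dir["Name"]); // e.g. W3SVC/40589473/root
                }
            }
        }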


  • use exec for dsadd

    - by Daryl Gill
    I'm programming on a Windows Server 2008 and I wish to have a web UI to interact with the domain's Active Directory. One of my main problems is that I'm calling dsadd from an HTML form, but this is not succeeding. I know my command is correct; I have tested it on the server's command line. My code is as below:

        if (isset($_POST['Submit'])) {
            $DesiredUsername = $_POST['DesiredUsername'];
            $DesiredPassword = $_POST['DesiredPassword'];

            $DU  = "{$DesiredUsername}";     // Desired Username
            $OU  = "PHPCreatedUsers";        // Domain OU
            $DC1 = "slayerserv";             // Domain Part One
            $DC2 = "local";                  // Domain Part Two
            $PWD = "{$DesiredPassword}";     // Password

            $ExecScript = 'dsadd user cn=$DesiredUsername,cn=PHPCreatedUsers,dc=slayerserv,dc=local -disabled no -pwd $DesiredPassword -mustchpwd yes';
            exec($ExecScript, $output);

            mysql_query("INSERT INTO addedusers (`ID`, `DU`, `OU`, `DC1`, `DC2`, `PWD`)
                         VALUES ('', '$DU', '$OU', '$DC1', '$DC2', '$PWD')");

            echo "<br><br>";
            print_r($output);
            # echo "User: $DesiredUsername Has been Created";
        }

    When I print_r($output); it returns a blank array: Array ( ). Could anyone provide me with a solution or point me in the right direction? Below is a working example of my usage of exec:

        $Script = 'ping 127.0.0.1 -n 1';
        exec($Script, $Output);
        print_r($Output);

    print_r($Output) gives:

        Array
        (
            [0] =>
            [1] => Pinging 127.0.0.1 with 32 bytes of data:
            [2] => Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
            [3] =>
            [4] => Ping statistics for 127.0.0.1:
            [5] =>     Packets: Sent = 1, Received = 1, Lost = 0 (0% loss),
            [6] => Approximate round trip times in milli-seconds:
            [7] =>     Minimum = 0ms, Maximum = 0ms, Average = 0ms
        )


  • How to deploy SQL Reporting 2005 when Data Sources are locked?

    - by spoulson
    The DBAs here maintain all SQL Server and SQL Reporting servers. I have a custom-developed SQL Reporting 2005 project in Visual Studio that runs fine on my local SQL database and Reporting instances. I need to deploy to a production server, so I had a folder created on a SQL Reporting 2005 server with permissions to upload files. Normally, a deploy from within Visual Studio is all that is needed to upload the report files. However, for security purposes, data sources are maintained explicitly by DBAs and stored in a separate, locked-down common folder on the reporting server. I had them create the data source for me. When I attempt to deploy from VS, it gives me the error "The item '/Data Sources' already exists." I get this whether I'm deploying the whole project or just a single report file. I have already set OverwriteDataSources=false in the project properties, and the TargetServer URL and folder are verified correct. I suppose I could copy the files manually, but I'd like to be able to deploy from within VS. What could I be doing wrong?


  • Advice on a DB that can be uploaded to a website by a smart client for collecting survey feedback

    - by absfabs
    Hello, I'm hoping you can help. I'm looking for a zero-config multi-user database that my WinForms application can easily upload to a webserver folder (together with 1 or 2 classic ASP pages), and am looking for some suggestions/recommendations. The idea is that the database will be used to collect feedback entered by people filling in the ASP pages. The pages will write to the database using JavaScript. The database will subsequently be downloaded again for processing once the responses are in. In summary:

    - It will mostly run in MS Windows environments.
    - I have a modest budget for this and do not mind paying for such a database, but there should be no runtime licensing costs.
    - It should be xcopy-deployable: once uploaded to a website folder it should be operational.
    - It should not have a .NET CLR dependency.
    - It should support a reasonable level of concurrent access. Average respondent count would be around 20-30, but one never knows.
    - It should be a reasonable size so that uploads/downloads to and from the site will be reasonably fast.

    I would appreciate your suggestions/comments. Many thanks, Abz.

    To clarify: this is a desktop commercial application for feedback management in a vertical market. It uses SQL Server as the backing store. The application currently provides feedback management from email and paper feedback. I now want to add web feedback capability. Getting users to make their SQL Servers accessible to a website is not an option at this time, as I want to make getting up and running as painless as possible. I intend to release a web-based implementation of the software in the near future, but for now am looking at the above as a pragmatic way to provide web-based feedback collection.


  • A couple of questions about NHibernate's GuidCombGenerator

    - by Eyvind
    The following code can be found in the NHibernate.Id.GuidCombGenerator class. The algorithm creates sequential (comb) guids based on combining a "random" guid with a DateTime. I have a couple of questions related to the lines that I have marked with *1) and *2) below:

        private Guid GenerateComb()
        {
            byte[] guidArray = Guid.NewGuid().ToByteArray();

            // *1)
            DateTime baseDate = new DateTime(1900, 1, 1);
            DateTime now = DateTime.Now;

            // Get the days and milliseconds which will be used to build the byte string
            TimeSpan days = new TimeSpan(now.Ticks - baseDate.Ticks);
            TimeSpan msecs = now.TimeOfDay;

            // *2)
            // Convert to a byte array
            // Note that SQL Server is accurate to 1/300th of a millisecond so we divide by 3.333333
            byte[] daysArray = BitConverter.GetBytes(days.Days);
            byte[] msecsArray = BitConverter.GetBytes((long) (msecs.TotalMilliseconds / 3.333333));

            // Reverse the bytes to match SQL Servers ordering
            Array.Reverse(daysArray);
            Array.Reverse(msecsArray);

            // Copy the bytes into the guid
            Array.Copy(daysArray, daysArray.Length - 2, guidArray, guidArray.Length - 6, 2);
            Array.Copy(msecsArray, msecsArray.Length - 4, guidArray, guidArray.Length - 4, 4);

            return new Guid(guidArray);
        }

    First of all, for *1), wouldn't it be better to have a more recent date as the baseDate, e.g. 2000-01-01, so as to make room for more values in the future? Regarding *2), why would we care about the accuracy of DateTimes in SQL Server when we are only interested in the bytes of the datetime anyway, and never intend to store the value in an SQL Server datetime field? Wouldn't it be better to use all the accuracy available from DateTime.Now?
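    For scale on *1): only the last two bytes of daysArray survive the Array.Copy, so the day counter wraps after 65536 days, roughly 179 years past whichever base date is chosen (around the year 2079 for the 1900 epoch). A hedged sketch of what the two suggested changes would look like, with the rest of the method unchanged:

        using System;

        static class CombTail
        {
            // Hypothetical variant of the two marked lines: later epoch, and no
            // division down to SQL Server's 1/300-second resolution.
            static void FillTail(byte[] guidArray)
            {
                DateTime baseDate = new DateTime(2000, 1, 1); // wraps ~179 years later
                DateTime now = DateTime.Now;

                TimeSpan days = new TimeSpan(now.Ticks - baseDate.Ticks);
                TimeSpan msecs = now.TimeOfDay;

                byte[] daysArray = BitConverter.GetBytes(days.Days);
                // keep full millisecond accuracy (86,400,000 ms/day still fits the
                // four bytes that the generator copies into the guid)
                byte[] msecsArray = BitConverter.GetBytes((long)msecs.TotalMilliseconds);

                Array.Reverse(daysArray);
                Array.Reverse(msecsArray);

                Array.Copy(daysArray, daysArray.Length - 2, guidArray, guidArray.Length - 6, 2);
                Array.Copy(msecsArray, msecsArray.Length - 4, guidArray, guidArray.Length - 4, 4);
            }
        }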


  • How to terminate a particular Azure worker role instance

    - by Oliver Bock
    Background: I am trying to work out the best structure for an Azure application. Each of my worker roles will spin up multiple long-running jobs. Over time I can transfer jobs from one instance to another by switching them to a read-only mode on the source instance, spinning them up on the target instance, and then spinning the originals down on the source instance. If I have too many jobs then I can tell Azure to spin up extra role instances and use them for new jobs. Conversely, if my load drops (e.g. during the night) then I can consolidate outstanding jobs onto a few machines and tell Azure to give me fewer instances. The trouble is that (as I understand it) Azure provides no mechanism to let me decide which instance to stop. Thus I cannot know which servers to consolidate onto, and some of my jobs will die when their instance stops, causing delays for users while I restart those jobs on surviving instances.

    Idea 1: I decide which instance to stop, and return from its Run(). I then tell Azure to reduce my instance count by one, and hope it concludes that the broken instance is a good candidate. Has anyone tried anything like this?

    Idea 2: I predefine a whole bunch of different worker roles with identical contents. I can individually stop and start them by switching their instance count from zero to one, and back again. I think this idea would work, but I don't like it because it seems to go against the natural Azure way of doing things, and because it involves me in a lot of extra bookkeeping to manage the extra worker roles.

    Idea 3: Live with it.

    Any better ideas?
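    A minimal sketch of what Idea 1 might look like (the class and flag are hypothetical; note also that in a worker role, returning from Run() causes the fabric to recycle the instance rather than permanently stop it, which is part of the gamble the idea takes):

        using System.Threading;
        using Microsoft.WindowsAzure.ServiceRuntime;

        public class JobWorkerRole : RoleEntryPoint
        {
            // set by the consolidation logic once this instance has been drained of jobs
            private static volatile bool _drained;

            public static void MarkDrained()
            {
                _drained = true;
            }

            public override void Run()
            {
                while (!_drained)
                {
                    // ... run long-lived jobs, accept transfers, flip jobs read-only ...
                    Thread.Sleep(1000);
                }
                // Falling out of Run() ends this instance's work loop; the controller
                // would then lower the instance count and hope the fabric retires
                // this idle instance rather than a busy one.
            }
        }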


  • Cassandra hot keyspace structure change

    - by Pierre
    Hello. I'm currently running a 12-node Cassandra cluster storing 4TB of data, with a replication factor set to 3. For the needs of an application update, we need to change the configuration of our keyspace, and we'd like to avoid any downtime if possible. I read on a mailing list that the best way to do it is to:

    1. Kill the Cassandra process on one server of the cluster.
    2. Start it again, wait for the commit log to be written to disk, and kill it again.
    3. Make the modifications in the storage.xml file.
    4. Rename or delete the files in the data directories according to the changes we made.
    5. Start Cassandra.
    6. Go to 1 with the next server on the list.

    My questions would be:

    - Did I understand the process correctly? Is there any risk of data corruption?
    - During the process, there will be servers with different versions of the storage.xml file in the same cluster, same keyspace. Is that a problem?
    - Same question as above if we not only add, rename, and remove ColumnFamilies, but also change the CompareWith parameter / transform an existing column family into a super one. Or do we need to change the name?

    Thank you for your answers. It's the first time I'll be doing this, and I'm a little bit scared.


  • How much should the AppDelegate do?

    - by Rudiger
    I'm designing quite a large app, and on startup it will create sessions with a few different servers. As these sessions are used across all parts of the app, creating them is something I thought would be best done in the AppDelegate. But the problem is I need the session progress to be represented on the screen. I plan to have a UIToolBar at the bottom of the main menu which I don't want to cover with the progress bar, but cover the UIView above it. So the way I see it, I could do it a few different ways:

    1) Have the AppDelegate establish the sessions and report the progress to the main menu class so it can represent it in the progress bar (will I have any issues doing this if the sessions are created in a separate thread?).

    2) Have the AppDelegate display the main menu (a UIView with a bunch of buttons and a UIToolBar) and have it track and display the progress (I have never displayed anything in the AppDelegate but assume you can do this, though it's not recommended).

    3) Have the AppDelegate just push the main menu and have the mainMenu class create the sessions and display the progress bar.

    4) I think the other way to do it is to create the sessions in a delegate class and have the delegate set to mainMenu rather than self (AppDelegate), although I've never used anything other than self, so I'm not sure if this will work, or if I will be able to close the thread (through calling super maybe?) as it's running in the AppDelegate rather than the delegate of the class.

    As I've said before, the sessions are being created in a class in a separate thread so it won't lock the UI, and I think the first way is best, but am I going to have issues having it run in a separate thread, report back to the AppDelegate, and then send that message to the mainMenu view? I hope that all makes sense; let me know if you need any further clarification. Any information is appreciated. Cheers,


  • BN_hex2bn magically segfaults in openSSL

    - by xunil154
    Greetings, this is my first post on stackoverflow, and I'm sorry if it's a bit long. I'm trying to build a handshake protocol for my own project and am having issues with the server converting the client's RSA public key to a BIGNUM. It works in my client code, but the server segfaults when attempting to convert the hex value of the client's public RSA to a BIGNUM. I have already checked that there is no garbage before or after the RSA data, and have looked online, but I'm stuck.

    Header segment:

        typedef struct KEYS {
            RSA  *serv;
            char *serv_pub;
            int   pub_size;
            RSA  *clnt;
        } KEYS;

        KEYS keys;

    Initializing function:

        // Generates and validates the server's key
        /* code for generating server RSA left out, it's working */

        // Set client exponent
        keys.clnt = 0;
        keys.clnt = RSA_new();
        BN_dec2bn(&keys.clnt->e, RSA_E_S);   // RSA_E_S contains the public exponent

    Problem code (in Network::server_handshake):

        // *Received an encrypted message from the network and decrypted it into 'buffer' (1024 bytes long)*
        cout << "Assigning clients RSA" << endl;
        // I have verified that 'buffer' contains the proper key
        if (BN_hex2bn(&keys.clnt->n, buffer) < 0) {
            Error("ERROR reading server RSA");
        }
        cout << "clients RSA has been assigned" << endl;

    The program segfaults at BN_hex2bn(&keys.clnt->n, buffer) with the following error (valgrind output):

        Invalid read of size 8
           at 0x50DBF9F: BN_hex2bn (in /usr/lib/libcrypto.so.0.9.8)
           by 0x40F23E: Network::server_handshake() (Network.cpp:177)
           by 0x40EF42: Network::startNet() (Network.cpp:126)
           by 0x403C38: main (server.cpp:51)
         Address 0x20 is not stack'd, malloc'd or (recently) free'd

        Process terminating with default action of signal 11 (SIGSEGV)
         Access not within mapped region at address 0x20
           at 0x50DBF9F: BN_hex2bn (in /usr/lib/libcrypto.so.0.9.8)

    And I don't know why: I'm using the exact same code in the client program, and it works just fine. Any input is greatly appreciated!


  • Apache HTTP and WebLogic Plug-in Location Directive question

    - by user275633
    All, we are using WebLogic Portal and an Apache 2.x HTTP server with the WebLogic plug-in for Apache for load balancing. We have an application that right now can only be accessed from one of our managed servers. What I would like to do is use the Location directive to direct all requests for that page to the one managed server, and I can't get it to work. The context that the portal tries to forward to is something like /MyWebApp?portalusername=<username>, where <username> equals a legitimate user; for example, /MyWebApp?portalusername=joesmith. All other applications and the plug-in are load balancing as expected, because every now and then you'll get sent to the second managed server for this particular application, and it's not deployed there. I tried various things in the Apache httpd.conf but can't seem to get it to work. Any suggestions? The following is a snippet of the httpd.conf:

        <Location /MyWebApp>
            SetHandler weblogic-handler
            WebLogicCluster myserver:7011
        </Location>

        <Location />
            SetHandler weblogic-handler
            WebLogicCluster myserver:7011,myserver2:7012
        </Location>

    Thanks in advance.


  • Session cookie not being created in Rails, very rarely and frustratingly.

    - by James
    Hi everyone, this is an issue that occurs sporadically for very few users; however, we haven't been able to replicate it. But I have now got a Chrome instance (Mac) which is reproducing the error (for some unknown reason), and I hope not to restart it until I have this nailed! It's a Rails application, using memcached for the session store. The bug manifests in the _app_session_id cookie not being created, while our JavaScript-generated cookie test and app-generated language cookies are being created successfully. This means that 422 / InvalidAuthToken errors are thrown for every form that is submitted by those afflicted; people can't log into the app. The error occurs across all browsers; we've had reports for IE7 and Firefox (which most users use). Switching to another browser often fixes the issue (though not always), and standard clear-cache, clear-cookies tactics do not. So now I have got a Chrome instance having the same issue, in development, staging, and live environments (meaning both http and https). All other browsers are fine. I've restarted the servers and restarted memcached. I don't really want to restart Chrome, at the risk that the issue goes away with that (having said that, it hasn't worked for users). I've been tcpdumping the requests, and although I'll keep digging, I'd love it if anyone had any suggestions, places to start looking, anything. This is really painful ;) Thanks!


  • HTTP: can GET and POST requests from a same machine come from different IPs?

    - by NoozNooz42
    I'm pretty sure I remember reading (but cannot find the links anymore) about this: on some ISPs (including at least one big ISP in the U.S.) it is possible for a user's GET and POST requests to appear to come from different IPs. Note that this is totally programming related, and I'll give an example below. I'm not talking about having your IP address dynamically change between two requests. I'm talking about this:

        IP 1: 123.45.67.89
        IP 2: 101.22.33.44

    The same user makes a GET, then a POST, then a GET again, then a POST again, and the server sees this:

        - GET from IP 1
        - POST from IP 2
        - GET from IP 1
        - POST from IP 2

    So although it's the same user, the webserver sees different IPs for the GET and the POSTs. Surely, given that HTTP is a stateless protocol, this is perfectly legit, right? I'd like to find back the explanation as to how/why certain ISPs have their networks configured such that this may happen. I'm asking because someone asked me to implement the following IP filter and I'm pretty sure it is fundamentally broken code (wreaking havoc for the users of at least one major American ISP). Here's a Java servlet filter that is supposed to protect against some attacks. The reasoning is that: "For any session, the filter checks that the IP address in the request is the same as the one used when the session was created. So in this case the session ID could not be stolen for forming fake sessions."

        http://www.servletsuite.com/servlets/protectsessionsflt.htm

    However, I'm pretty sure this is inherently broken, because there are ISPs where you may see GETs and POSTs coming from different IPs. Any info on this subject is very welcome.


  • Powershell invoke-command with PSCredential in line

    - by jaffa
    I need to be able to run a command on another server. This script acts as a bootstrap to another script which is run on the actual server. It works great on servers in the same domain, but if I need to run the script on a remote server, I need to specify credentials. The command is kicked off from an MSBuild targets file like so:

        <Target Name="PreDeployment" Condition="true" BeforeTargets="MSDeployPublish">
          <Exec Command="powershell.exe -ExecutionPolicy Bypass invoke-command bootstrapScript.ps1 -computername $(MyServer) -argumentlist param1, param2" />
        </Target>

    However, I need to be able to supply the credentials by creating a new PSCredential object with a secure password, so that my deployment script can run on a remote server:

        <Target Name="PreDeployment" Condition="true" BeforeTargets="MSDeployPublish">
          <Exec Command="powershell.exe -ExecutionPolicy Bypass invoke-command bootstrapScript.ps1 -computername $(MyServer) -credential New-Object System.Management.Automation.PSCredential ('admin', (convertto-securestring $(Password) -asplaintext -force)) -argumentlist param1, param2" />
        </Target>

    When I run the build, a dialog pops up with the username set to System.Management.Automation.PSCredential. I need to be able to create the credentials in-line in the Exec command. How do I accomplish this?


  • Saving HttpResponse/Request to file system

    - by chrisjlong
    Here is my scenario: the user fills out this large page, which is dynamically created based on DB values. Those values can change. When the user fills out the page and hits submit, we want to save a copy of the page as HTML on the server; this way, if the text or wording changes, when they go back to view their posted information, it is historically accurate. So I basically need to do this:

        protected void buttonSave_Click(object sender, EventArgs e)
        {
            // collect information into an object to save it in the db
            bool result = BusinessLogic.Save(myBusinessObject);
            if (result)
                //!!! Here is where I need to save this page as an html file on my server's IFS !!!
            else
                //whatever
            Response.Redirect("~/SomeOtherPage.aspx");
        }

    Any help is greatly appreciated. Also, I CANNOT just request the data from the URL, because query string parameters are a big no-no in this case. The key to pull the database info up (at its highest level) is all in session, so I can't just request a URL and save it. Thanks!
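    One approach that may fit (a hedged sketch; PageArchiver, SavePageSnapshot, and the file-naming scheme are assumptions): HttpServerUtility.Execute renders a page in-process into a TextWriter, so the HTML snapshot can be produced without issuing an extra HTTP request or exposing query-string parameters, and the executed page still sees the current session.

        using System;
        using System.IO;
        using System.Web;

        // Hypothetical helper: render a page in-process and archive the HTML.
        public static class PageArchiver
        {
            public static void SavePageSnapshot(string virtualPath, string archiveDir)
            {
                var html = new StringWriter();

                // Execute runs the target page inside the current request, capturing
                // its output; passing true preserves the posted form values so the
                // snapshot reflects what the user actually submitted.
                HttpContext.Current.Server.Execute(virtualPath, html, true);

                string file = Path.Combine(archiveDir,
                    "snapshot_" + DateTime.UtcNow.ToString("yyyyMMddHHmmss") + ".html");
                File.WriteAllText(file, html.ToString());
            }
        }

    Called from buttonSave_Click after a successful BusinessLogic.Save, this would write the frozen copy before the redirect.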


  • How do I launch background jobs w/ paramiko?

    - by sophacles
    Here is my scenario: I am trying to automate some tasks using Paramiko. The tasks need to be started in this order (using the notation (host, task)): (A, 1), (B, 2), (C, 2), (A, 3), (B, 3) -- essentially starting servers and clients for some testing in the correct order. Further, because the networking may get mucked up during the tests, and because I need some of the output from them, I would like to just redirect output to a file. In similar scenarios the common response is to use 'screen -m -d' or to use 'nohup'. However, with Paramiko's exec_command, nohup doesn't actually exit. Using:

        bash -c -l nohup test_cmd &

    doesn't work either; exec_command still blocks until the process ends. In the screen case, output redirection doesn't work very well (actually, doesn't work at all, as best I can figure out). So, after all that explanation, my question is: is there an easy, elegant way to detach processes and capture output in such a way as to end Paramiko's exec_command blocking?

    Update: The dtach command works nicely for this!

