Search Results

Search found 46494 results on 1860 pages for 'public key encryption'.


  • Problems with bash script, mysql inserts and launchd

    - by Armands
    I am developing an automated system that consists of 3 parts: mysql, bash and launchd. The bash script takes folders of work-related files, zips and archives them, and puts info about them into a database located on a local MAMP server. Everything works as expected when I run the script from the terminal. But when I use launchd to run the script automatically, it finishes without errors yet does not put the values into the database. I've tried logging the returned messages, but the logs end up empty, as though the command had run the way it was supposed to. Any help would be appreciated! .plist contents: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <dict> <key>Label</key> <string>com.adevo.ari.zip</string> <key>ProgramArguments</key> <array> <string>/Volumes/Archive-Plus/B-ARCHIVE-PLUS/ZZ_UTILITY_FOLDER/Compress.sh</string> </array> <key>Nice</key> <integer>1</integer> <key>StartInterval</key> <integer>120</integer> <key>RunAtLoad</key> <true/> </dict> </plist> I made this .plist file just by searching the web.
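    A likely culprit when a script behaves under Terminal but not under launchd is the environment: launchd jobs start with a minimal PATH, so the mysql client (installed under MAMP rather than /usr/bin) may simply not be found, and without log paths any error message is discarded. A minimal sketch of extra keys for the .plist, assuming MAMP's binaries live in /Applications/MAMP/Library/bin (adjust to your install), plus log files to capture whatever the script prints:

        <key>EnvironmentVariables</key>
        <dict>
            <key>PATH</key>
            <string>/usr/bin:/bin:/usr/sbin:/sbin:/Applications/MAMP/Library/bin</string>
        </dict>
        <key>StandardOutPath</key>
        <string>/tmp/com.adevo.ari.zip.out.log</string>
        <key>StandardErrorPath</key>
        <string>/tmp/com.adevo.ari.zip.err.log</string>

    Alternatively, calling mysql by its full path inside Compress.sh sidesteps the PATH issue entirely.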

    Read the article

  • SSL certificate for Oracle Application Server 11g

    - by Easter Sunshine
    I was asked to get an SSL certificate for an "Oracle Application Server 11g" which has a soon-to-expire certificate. Brushing aside the fact that 10g seems to be the newest version, I got a certificate from InCommon, as I usually do without problem (except this is the first time I supplied Oracle Application Server 11g as the software type on the CSR form). On the email containing links to download the certificate, it mentioned: Certificate Details: SSL Type : InCommon SSL Server : OTHER I forwarded the email over to the person responsible for installing it and got a reply that the server type must be Oracle Application Server for the certificate to work (the CN is the same as before). They were unable to install this certificate (no details provided to me) and mentioned they had this issue previously with Thawte when they didn't supply Oracle Application Server as the server type. I don't see any significant difference between the currently installed certificate (working) and the new one I just got signed by InCommon (not working). $ openssl x509 -in sso-current.cer -text shows, with irrelevant information ommitted. Data: Version: 3 (0x2) Signature Algorithm: sha1WithRSAEncryption Issuer: C=ZA, ST=Western Cape, L=Cape Town, O=Thawte Consulting cc, OU=Certification Services Division, CN=Thawte Premium Server CA/[email protected] Validity Not Before: Oct 1 00:00:00 2009 GMT Not After : Nov 28 23:59:59 2012 GMT Subject Public Key Info: Public Key Algorithm: rsaEncryption Public-Key: (2048 bit) Modulus: Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Basic Constraints: critical CA:FALSE X509v3 CRL Distribution Points: Full Name: URI:http://crl.thawte.com/ThawteServerPremiumCA.crl X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication Authority Information Access: OCSP - URI:http://ocsp.thawte.com Signature Algorithm: sha1WithRSAEncryption and $ openssl x509 -in sso-new.cer -text shows Data: Version: 3 (0x2) Signature Algorithm: sha1WithRSAEncryption Issuer: C=US, O=Internet2, OU=InCommon, CN=InCommon Server CA Validity Not Before: Nov 8 00:00:00 2012 GMT Not After : Nov 8 23:59:59 2014 GMT Subject Public Key Info: Public Key Algorithm: rsaEncryption Public-Key: (2048 bit) Modulus: Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Authority Key Identifier: keyid:48:4F:5A:FA:2F:4A:9A:5E:E0:50:F3:6B:7B:55:A5:DE:F5:BE:34:5D X509v3 Subject Key Identifier: 18:8D:F6:F5:87:4D:C4:08:7B:2B:3F:02:A1:C7:AC:6D:A7:90:93:02 X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Basic Constraints: critical CA:FALSE X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Certificate Policies: Policy: 1.3.6.1.4.1.5923.1.4.3.1.1 CPS: https://www.incommon.org/cert/repository/cps_ssl.pdf X509v3 CRL Distribution Points: Full Name: URI:http://crl.incommon.org/InCommonServerCA.crl Authority Information Access: CA Issuers - URI:http://cert.incommon.org/InCommonServerCA.crt OCSP - URI:http://ocsp.incommon.org Nothing jumps out at me as the reason one would not work so I don't have a specific request for the signer for what to do differently when re-signing.
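    One thing worth ruling out before blaming the "server type" field: the old Thawte certificate chains directly to a root the server already trusts, while the InCommon one is issued by an intermediate ("InCommon Server CA"), so the install can fail simply because that intermediate is missing from the server's wallet/keystore. A couple of hedged openssl checks (the chain file name is an assumption; use whatever bundle InCommon's download page provides):

        # compare subject and issuer of the working and failing certificates
        openssl x509 -in sso-current.cer -noout -subject -issuer -dates
        openssl x509 -in sso-new.cer -noout -subject -issuer -dates

        # verify the new certificate against the InCommon intermediate + root bundle
        openssl verify -CAfile incommon-chain.pem sso-new.cer

    If the verify step only passes when the bundle is supplied, importing the intermediate alongside the certificate is likely the actual fix, regardless of what was typed into the CSR form.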

    Read the article

  • Cloudfront - How to invalidate objects in a distribution that was transformed from secured to public?

    - by Gil
    The setting: I have an Amazon CloudFront distribution that was originally set up as secured. Objects in this distribution required URL signing. For example, a valid URL used to be of the following format: https://d1stsppuecoabc.cloudfront.net/images/TheImage.jpg?Expires=1413119282&Signature=NLLRTVVmzyTEzhm-ugpRymi~nM2v97vxoZV5K9sCd4d7~PhgWINoTUVBElkWehIWqLMIAq0S2HWU9ak5XIwNN9B57mwWlsuOleB~XBN1A-5kzwLr7pSM5UzGn4zn6GRiH-qb2zEoE2Fz9MnD9Zc5nMoh2XXwawMvWG7EYInK1m~X9LXfDvNaOO5iY7xY4HyIS-Q~xYHWUnt0TgcHJ8cE9xrSiwP1qX3B8lEUtMkvVbyLw__&Key-Pair-Id=APKAI7F5R77FFNFWGABC The distribution points to an S3 bucket that also used to be secured (it only allowed access through the CloudFront distribution). What happened: At some point, the URL signing expired and would return a 403. Since we no longer need to keep the same security level, I recently changed the settings of the CloudFront distribution and of the S3 bucket it points to, both to be public. I then tried to invalidate objects in this distribution. Invalidation did not throw any errors; however, it did not seem to succeed. Requests to the same CloudFront URL (with or without the query string) still return 403. The response header looks like: HTTP/1.1 403 Forbidden Server: CloudFront Date: Mon, 18 Aug 2014 15:16:08 GMT Content-Type: text/xml Content-Length: 110 Connection: keep-alive X-Cache: Error from cloudfront Via: 1.1 3abf650c7bf73e47515000bddf3f04a0.cloudfront.net (CloudFront) X-Amz-Cf-Id: j1CszSXz0DO-IxFvHWyqkDSdO462LwkfLY0muRDrULU7zT_W4HuZ2B== Things I tried: I set up another CloudFront distribution that points to the same S3 bucket as its origin server. Requests to the same object in the new distribution were successful. The question: Did anyone encounter the same situation, where a CloudFront URL that returns 403 cannot be invalidated? Is there any reason why the object wouldn't get invalidated? Thanks for your help!
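    For reference, the same invalidation can be issued from the AWS CLI, and if it completes while the 403 persists, the block is usually still access control rather than caching: either the distribution's cache behavior still has "Restrict Viewer Access" (trusted signers) enabled, or the S3 bucket policy still only grants read to the old origin access identity. A hedged sketch, with the distribution and bucket names as placeholders (older CLI releases required CloudFront support to be enabled as a preview feature first):

        aws cloudfront create-invalidation \
            --distribution-id EXXXXXXXXXXXXX \
            --paths "/images/TheImage.jpg" "/images/*"

        aws s3api get-bucket-policy --bucket my-origin-bucket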

    Read the article

  • Roles / Profiles / Perspectives in NetBeans IDE 7.1

    - by Geertjan
    With a check out of main-silver from yesterday, I'm able to use the brand new "role" attribute in @TopComponent.Registration, as you can see below, in the bit in bold: @ConvertAsProperties(dtd = "-//org.role.demo.ui//Admin//EN", autostore = false) @TopComponent.Description(preferredID = "AdminTopComponent", //iconBase="SET/PATH/TO/ICON/HERE", persistenceType = TopComponent.PERSISTENCE_ALWAYS) @TopComponent.Registration(mode = "editor", openAtStartup = true, role="admin") public final class AdminTopComponent extends TopComponent { And here's a window for general users of the application, with the "role" attribute set to "user": @ConvertAsProperties(dtd = "-//org.role.demo.ui//User//EN", autostore = false) @TopComponent.Description(preferredID = "UserTopComponent", //iconBase="SET/PATH/TO/ICON/HERE", persistenceType = TopComponent.PERSISTENCE_ALWAYS) @TopComponent.Registration(mode = "explorer", openAtStartup = true, role="user") public final class UserTopComponent extends TopComponent { So, I have two windows. One is assigned to the "admin" role, the other to the "user" role. In the "ModuleInstall" class, I add a "WindowSystemListener" and set "user" as the application's role: public class Installer extends ModuleInstall implements WindowSystemListener { @Override public void restored() { WindowManager.getDefault().addWindowSystemListener(this); } @Override public void beforeLoad(WindowSystemEvent event) { WindowManager.getDefault().setRole("user"); WindowManager.getDefault().removeWindowSystemListener(this); } @Override public void afterLoad(WindowSystemEvent event) { } @Override public void beforeSave(WindowSystemEvent event) { } @Override public void afterSave(WindowSystemEvent event) { } } So, when the application starts, the "UserTopComponent" is shown, not the "AdminTopComponent". Next, I have two Actions, for switching between the two roles, as shown below: @ActionID(category = "Window", id = "org.role.demo.ui.SwitchToAdminAction") @ActionRegistration(displayName = "#CTL_SwitchToAdminAction") @ActionReferences({ @ActionReference(path = "Menu/Window", position = 250) }) @Messages("CTL_SwitchToAdminAction=Switch To Admin") public final class SwitchToAdminAction extends AbstractAction { @Override public void actionPerformed(ActionEvent e) { WindowManager.getDefault().setRole("admin"); } @Override public boolean isEnabled() { return !WindowManager.getDefault().getRole().equals("admin"); } } @ActionID(category = "Window", id = "org.role.demo.ui.SwitchToUserAction") @ActionRegistration(displayName = "#CTL_SwitchToUserAction") @ActionReferences({ @ActionReference(path = "Menu/Window", position = 250) }) @Messages("CTL_SwitchToUserAction=Switch To User") public final class SwitchToUserAction extends AbstractAction { @Override public void actionPerformed(ActionEvent e) { WindowManager.getDefault().setRole("user"); } @Override public boolean isEnabled() { return !WindowManager.getDefault().getRole().equals("user"); } } When I select one of the above actions, the role changes, and the other window is shown. I could, of course, add a Login dialog to the "SwitchToAdminAction", so that authentication is required in order to switch to the "admin" role. Now, let's say I am now in the "user" role. So, the "UserTopComponent" shown above is now opened. I decide to also open another window, the Properties window, as below... ...and, when I am in the "admin" role, when the "AdminTopComponent" is open, I decide to also open the Output window, as below... 
Now, when I switch from one role to the other, the additional windows I opened will also be opened, together with the explicit members of the currently selected role. The main window position and size are also persisted across roles. When I look in the "build" folder of my project in development, I see two different Windows2Local folders, one per role, automatically created because there is something to be persisted for a particular role, e.g., when a switch to a different role is done. And with that, we now clearly have roles/profiles/perspectives in NetBeans Platform applications from NetBeans Platform 7.1 onwards.
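    As a sketch of the login guard mentioned above for SwitchToAdminAction, the standard NetBeans dialog API (org.openide.NotifyDescriptor and org.openide.DialogDisplayer) is enough; the credential check itself is a hypothetical placeholder:

        @Override
        public void actionPerformed(ActionEvent e) {
            NotifyDescriptor.InputLine descriptor =
                    new NotifyDescriptor.InputLine("Admin password:", "Switch To Admin");
            Object result = DialogDisplayer.getDefault().notify(descriptor);
            // isValidAdminPassword(...) stands in for whatever user store the application uses
            if (result == NotifyDescriptor.OK_OPTION
                    && isValidAdminPassword(descriptor.getInputText())) {
                WindowManager.getDefault().setRole("admin");
            }
        }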

    Read the article

  • Why does key-based ssh fail even after setting up the authorized_keys file on the remote host?

    - by Brad Grissom
    These details don't matter much, but I am on an Ubuntu 12.04 machine and I want to ssh into my Raspberry Pi without a password. I followed the standard procedure for setting up ssh without a password: local $ ssh-keygen -t rsa (hit enter to accept the defaults) local $ scp ~/.ssh/id_rsa.pub matt@raspihost:~/.ssh/authorized_keys I logged onto the raspihost and checked all my permissions on ~/.ssh/ and on the authorized_keys file itself. It was still not working!
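    The parts that bite most people even after the key is copied are ownership/permissions on the Pi and the server's own log, which states the reason for falling back to password auth. A hedged checklist (paths assume the stock Raspbian/OpenSSH layout):

        # on the Raspberry Pi: sshd refuses keys if these are too permissive
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys
        chmod go-w ~            # a group/world-writable home directory also disables key auth

        # then watch the server side while retrying with verbose client output
        sudo tail -f /var/log/auth.log      # on the Pi
        ssh -vvv matt@raspihost             # from the Ubuntu machine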

    Read the article

  • Design for an interface implementation that provides additional functionality

    - by Limbo Exile
    There is a design problem that I came upon while implementing an interface: Let's say there is a Device interface that promises to provide functionalities PerformA() and GetB(). This interface will be implemented for multiple models of a device. What happens if one model has an additional functionality CheckC() which doesn't have equivalents in other implementations? I came up with different solutions, none of which seems to comply with interface design guidelines: To add CheckC() method to the interface and leave one of its implementations empty: interface ISomeDevice { void PerformA(); int GetB(); bool CheckC(); } class DeviceModel1 : ISomeDevice { public void PerformA() { // do stuff } public int GetB() { return 1; } public bool CheckC() { bool res; // assign res a value based on some validation return res; } } class DeviceModel2 : ISomeDevice { public void PerformA() { // do stuff } public int GetB() { return 1; } public bool CheckC() { return true; // without checking anything } } This solution seems incorrect as a class implements an interface without truly implementing all the demanded methods. To leave out CheckC() method from the interface and to use explicit cast in order to call it: interface ISomeDevice { void PerformA(); int GetB(); } class DeviceModel1 : ISomeDevice { public void PerformA() { // do stuff } public int GetB() { return 1; } public bool CheckC() { bool res; // assign res a value based on some validation return res; } } class DeviceModel2 : ISomeDevice { public void PerformA() { // do stuff } public int GetB() { return 1; } } class DeviceManager { private ISomeDevice myDevice; public void ManageDevice(bool newDeviceModel) { myDevice = (newDeviceModel) ? new DeviceModel1() : new DeviceModel2(); myDevice.PerformA(); int b = myDevice.GetB(); if (newDeviceModel) { DeviceModel1 newDevice = myDevice as DeviceModel1; bool c = newDevice.CheckC(); } } } This solution seems to make the interface inconsistent. For the device that supports CheckC(): to add the logic of CheckC() into the logic of another method that is present in the interface. This solution is not always possible. So, what is the correct design to be used in such cases? Maybe creating an interface should be abandoned altogether in favor of another design?
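    A fourth option worth weighing is interface segregation: keep ISomeDevice minimal and move the optional capability into a second interface, so callers probe for the capability instead of branching on a model flag. A rough sketch:

        interface ICheckableDevice : ISomeDevice
        {
            bool CheckC();
        }

        class DeviceModel1 : ICheckableDevice
        {
            public void PerformA() { /* do stuff */ }
            public int GetB() { return 1; }
            public bool CheckC() { bool res = true; /* validation here */ return res; }
        }

        class DeviceModel2 : ISomeDevice
        {
            public void PerformA() { /* do stuff */ }
            public int GetB() { return 1; }
        }

        // in DeviceManager
        ISomeDevice myDevice = newDeviceModel ? (ISomeDevice)new DeviceModel1() : new DeviceModel2();
        myDevice.PerformA();
        int b = myDevice.GetB();
        var checkable = myDevice as ICheckableDevice;
        if (checkable != null)
        {
            bool c = checkable.CheckC();
        }

    This keeps ISomeDevice honest for every model, and the DeviceManager no longer needs the newDeviceModel flag for anything except construction.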

    Read the article

  • How do I use key combinations on an axis on a joystick in xorg?

    - by valadil
    I'm using xserver-xorg-input-joystick on Debian Stable so I can use a joystick in place of the mouse. I have mouse movement working correctly, but got stuck trying to add functions for some other keys. These work: #Left stick #Pointer Option "MapAxis1" "mode=relative axis=1.5x" Option "MapAxis2" "mode=relative axis=1.5y" #Right stick #Arrow keys Option "MapAxis4" "mode=relative keylow=Left keyhigh=Right" Option "MapAxis5" "mode=relative keylow=Up keyhigh=Down" But when I try to make key combos (so I can navigate windows and screens in xmonad) I have no luck. #dpad #xmonad focus #up/down toggle window. l/r choose screen. Option "MapAxis8" "mode=relative keylow=Super_L,k keyhigh=Super_L,j" Option "MapAxis7" "mode=relative keylow=Super_L,w keyhigh=Super_L,e" I've also tried Super_R, plain old Super, Meta, and mod4mask, and anything else I can think of. These buttons print the letter, but don't appear to hold down the modifying key. The exception to that is shift. If I specify Shift_L or Shift_R, I get a capital letter. xev indicates that modifier keys are being pressed. If I lower Axis8, I get press Super_L, press k, release k, release Super_L. That looks like it should be working. Maybe this is an xmonad problem and not a joystick driver one? I'm also having trouble with getting an axis to use other XF86 keys: # triggers # song selection Option "MapAxis3" "mode=relative keylow=none keyhigh=XF86AudioForward" Option "MapAxis6" "mode=relative keylow=none keyhigh=XF86AudioBack" That does nothing. Any idea why? If it turns out that this isn't something I can do on an axis, but would work with a button, is there a way to treat my joysticks as buttons? Also, if anyone has suggestions for the other 5 buttons I'll have left after mouse buttons are bound, I'm listening.

    Read the article

  • Code refactoring with Visual Studio 2010 Part-4

    - by Jalpesh P. Vadgama
    I have been writing a few posts about the code refactoring features in Visual Studio 2010. This post is also part of that series and will be the last one. In this post I am going to explain two features: 1) Encapsulate Field and 2) Extract Interface. Let's explore both features in detail. Encapsulate Field: This is a nice code refactoring feature provided by Visual Studio 2010. With its help we can create properties from the existing private fields of a class. Let's take a simple example of a Customer class. In it there are two private fields called firstName and lastName. Below is the code for the class. public class Customer { private string firstName; private string lastName; public string Address { get; set; } public string City { get; set; } } Now let's encapsulate the first field, firstName, with the Encapsulate Field feature. First select that field, go to the Refactor menu in Visual Studio 2010 and click on Encapsulate Field. Once you click that, a dialog box will appear like the following. Once you click OK, a preview dialog box will open, as we have selected "preview reference changes". I think it's a good idea to check that option and preview the code the IDE is about to change. The dialog will look like the following. Once you click Apply, it creates a new property called FirstName. I did the same for lastName, and now my Customer class code looks like the following. public class Customer { private string firstName; public string FirstName { get { return firstName; } set { firstName = value; } } private string lastName; public string LastName { get { return lastName; } set { lastName = value; } } public string Address { get; set; } public string City { get; set; } } So you can see that it's very easy to create properties from existing fields, and you don't have to change anything in the code yourself; the IDE changes everything for you. Extract Interface: When you are writing a software prototype and don't yet know its future implementation, it's good practice to use an interface. Here I am going to explain how we can extract an interface from existing code, without writing a single line of code, with the help of the code refactoring features of Visual Studio 2010. For that I have created a simple repository class called CustomerRespository with three methods, like the following. public class CustomerRespository { public void Add() { // Some code to add customer } public void Update() { //some code to update customer } public void Delete() { //some code delete customer } } In the above class there are three methods (Add, Update and Delete) where we are going to implement some code for each one. Now I want to create an interface which I can use for my other entities in the project. So let's create an interface from the above class with the help of Visual Studio 2010. First select the class, go to the Refactor menu and click Extract Interface. It will open up a dialog box like the following. Here I have selected all the methods for the interface, and once I click OK it will create a new file called ICustomerRespository containing the interface, just like the following. Here is the code for that interface. using System; namespace CodeRefractoring { interface ICustomerRespository { void Add(); void Delete(); void Update(); } } Now let's see the code for our class. It will also be changed like the following to implement the interface. 
public class CustomerRespository : ICustomerRespository { public void Add() { // Some code to add customer } public void Update() { //some code to update customer } public void Delete() { //some code delete customer } } Isn't that great? We have created an interface and implemented it without writing a single line of code. Hope you liked it. Stay tuned for more. Till then, happy programming.

    Read the article

  • Welcome Windows Embedded Compact!

    - by Luca Calligaris
    Windows Embedded Compact 7 Public Community Technology Preview (Public CTP) is finally available for downloading: You need a Windows Live ID to log in and download the Public CTP Go to the Connection Directory, find Windows Embedded Compact 7 Public CTP and click on (apply) Download the Public CTP from the Compact 7 Public CTP program page In the next blog entries I'll try to address some of the new features of the new version of my favourite OS.

    Read the article

  • Making a Camera look at a target Vector

    - by Peteyslatts
    I have a camera that works as long as its stationary. Now I'm trying to create a child class of that camera class that will look at its target. The new addition to the class is a method called SetTarget(). The method takes in a Vector3 target. The camera wont move but I need it to rotate to look at the target. If I just set the target, and then call CreateLookAt() (which takes in position, target, and up), when the object gets far enough away and underneath the camera, it suddenly flips right side up. So I need to transform the up vector, which currently always stays at Vector3.Up. I feel like this has something to do with taking the angle between the old direction vector and the new one (which I know can be expressed by target - position). I feel like this is all really vague, so here's the code for my base camera class: public class BasicCamera : Microsoft.Xna.Framework.GameComponent { public Matrix view { get; protected set; } public Matrix projection { get; protected set; } public Vector3 position { get; protected set; } public Vector3 direction { get; protected set; } public Vector3 up { get; protected set; } public Vector3 side { get { return Vector3.Cross(up, direction); } protected set { } } public BasicCamera(Game game, Vector3 position, Vector3 target, Vector3 up) : base(game) { this.position = position; this.direction = target - position; this.up = up; CreateLookAt(); projection = Matrix.CreatePerspectiveFieldOfView( MathHelper.PiOver4, (float)Game.Window.ClientBounds.Width / (float)Game.Window.ClientBounds.Height, 1, 500); } public override void Update(GameTime gameTime) { // TODO: Add your update code here CreateLookAt(); base.Update(gameTime); } } And this is the code for the class that extends the above class to look at its target. class TargetedCamera : BasicCamera { public Vector3 target { get; protected set; } public TargetedCamera(Game game, Vector3 position, Vector3 target, Vector3 up) : base(game, position, target, up) { this.target = target; } public void SetTarget(Vector3 target) { direction = target - position; } protected override void CreateLookAt() { view = Matrix.CreateLookAt(position, target, up); } }
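    One hedged way to remove the flip is to stop passing a constant world up into CreateLookAt and instead re-orthonormalize the stored up vector against the new direction whenever the target moves; because the correction is made relative to the previous up rather than Vector3.Up, the basis rolls smoothly instead of snapping when the target passes under the camera. A sketch of what SetTarget could look like (it assumes the new direction is never exactly parallel to the current up vector in a single step):

        public void SetTarget(Vector3 target)
        {
            this.target = target;
            direction = Vector3.Normalize(target - position);

            // rebuild side from the previous up, then rebuild up so the basis stays orthonormal
            Vector3 newSide = Vector3.Normalize(Vector3.Cross(up, direction));
            up = Vector3.Normalize(Vector3.Cross(direction, newSide));
        }

    The existing CreateLookAt override then keeps using the stored up and no longer needs Vector3.Up at all.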

    Read the article

  • why do Vagrant docs suggest using public IP address 33.33.33.10 for local VMs?

    - by Gert
    I'm following a tutorial to set up Vagrant (a tool to build and configure portable virtual machine images), and the Vagrant documentation seems to suggest using IPv4 address 33.33.33.10 to configure a new box. That is a publicly routed IP address, so I'm a bit confused why using this address is suggested. Since I don't own this network, I should not use an address from the 33.0.0.0/8 range. Am I correct in thinking that I should only use either a public address from a network I own, or an address from one of the private ranges as defined in RFC 1918? If so, why does the Vagrant documentation suggest otherwise?
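    That reading is right: 33.33.33.10 sits in ordinary publicly routed space, not in an RFC 1918 range, and the value in the older Vagrant docs is just an unfortunate convention. For a host-only/private network it is safer to pick something like 192.168.56.x (a range commonly used for VirtualBox host-only adapters). A hedged Vagrantfile line, noting the syntax differs slightly between Vagrant versions:

        # Vagrant 1.1+ style; Vagrant 1.0 used: config.vm.network :hostonly, "192.168.56.10"
        config.vm.network "private_network", ip: "192.168.56.10"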

    Read the article

  • Data Transformation Pipeline

    - by davenewza
    I have create some kind of data pipeline to transform coordinate data into more useful information. Here is the shell of pipeline: public class PositionPipeline { protected List<IPipelineComponent> components; public PositionPipeline() { components = new List<IPipelineComponent>(); } public PositionPipelineEntity Process(Position position) { foreach (var component in components) { position = component.Execute(position); } return position; } public PositionPipeline RegisterComponent(IPipelineComponent component) { components.Add(component); return this; } } Every IPipelineComponent accepts and returns the same type - a PositionPipelineEntity. Code: public interface IPipelineComponent { PositionPipelineEntity Execute(PositionPipelineEntity position); } The PositionPipelineEntity needs to have many properties, many which are unused in certain components and required in others. Some properties will also have become redundant at the end of the pipeline. For example, these components could be executed: TransformCoordinatesComponent: Parse the raw coordinate data into a Coordinate type. DetermineCountryComponent: Determine and stores country code. DetermineOnRoadComponent: Determine and store whether coordinate is on a road. Code: pipeline .RegisterComponent(new TransformCoordinatesComponent()) .RegisterComponent(new DetermineCountryComponent()) .RegisterComponent(new DetermineOnRoadComponent()); pipeline.Process(positionPipelineEntity); The PositionPipelineEntity type: public class PositionPipelineEntity { // Only relevant to the TransformCoordinatesComponent public decimal RawCoordinateLatitude { get; set; } // Only relevant to the TransformCoordinatesComponent public decimal RawCoordinateLongitude { get; set; } // Required by all components after TransformCoordinatesComponent public Coordinate CoordinateLatitude { get; set; } // Required by all components after TransformCoordinatesComponent public Coordinate CoordinateLongitude { get; set; } // Set in DetermineCountryComponent, not required anywhere. // Requires CoordinateLatitude and CoordinateLongitude (TransformCoordinatesComponent) public string CountryCode { get; set; } // Set in DetermineOnRoadComponent, not required anywhere. // Requires CoordinateLatitude and CoordinateLongitude (TransformCoordinatesComponent) public bool OnRoad { get; set; } } Problems: I'm very concerned about the dependency that a component has on properties. The way to solve this would be to create specific types for each component. The problem then is that I cannot chain them together like this. The other problem is the order of components in the pipeline matters. There is some dependency. The current structure does not provide any static or runtime checking for such a thing. Any feedback would be appreciated.
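    One hedged alternative is to give each stage its own input and output type and compose stages generically; the grab-bag entity disappears and stage ordering is enforced by the compiler rather than by convention. A rough sketch with invented type names (RawPosition, Coordinates and GeoPosition are hypothetical stage-specific types, and the existing components are assumed to be reworked to implement the generic interface):

        // requires: using System;
        public interface IPipelineComponent<TIn, TOut>
        {
            TOut Execute(TIn input);
        }

        public static class PipelineExtensions
        {
            // expose a component as a delegate so stages can be chained fluently
            public static Func<TIn, TOut> AsFunc<TIn, TOut>(this IPipelineComponent<TIn, TOut> component)
            {
                return input => component.Execute(input);
            }

            // the generic parameters only line up when the stage order is valid
            public static Func<TIn, TOut> Then<TIn, TMid, TOut>(
                this Func<TIn, TMid> first, IPipelineComponent<TMid, TOut> next)
            {
                return input => next.Execute(first(input));
            }
        }

        // usage
        Func<RawPosition, GeoPosition> pipeline =
            new TransformCoordinatesComponent().AsFunc()
                .Then(new DetermineCountryComponent())
                .Then(new DetermineOnRoadComponent());
        GeoPosition result = pipeline(rawPosition);   // rawPosition: a RawPosition instance

    Registering DetermineCountryComponent before TransformCoordinatesComponent then becomes a compile error instead of a runtime surprise.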

    Read the article

  • Managing common code on Windows 7 (.NET) and Windows 8 (WinRT)

    - by ryanabr
    Recent announcements regarding Windows Phone 8 and the fact that it will have the WinRT behind it might make some of this less painful but I  discovered the "XmlDocument" object is in a new location in WinRT and is almost the same as it's brother in .NET System.Xml.XmlDocument (.NET) Windows.Data.Xml.Dom.XmlDocument (WinRT) The problem I am trying to solve is how to work with both types in the code that performs the same task on both Windows Phone 7 and Windows 8 platforms. The first thing I did was define my own XmlNode and XmlNodeList classes that wrap the actual Microsoft objects so that by using the "#if" compiler directive either work with the WinRT version of the type, or the .NET version from the calling code easily. public class XmlNode     { #if WIN8         public Windows.Data.Xml.Dom.IXmlNode Node { get; set; }         public XmlNode(Windows.Data.Xml.Dom.IXmlNode xmlNode)         {             Node = xmlNode;         } #endif #if !WIN8 public System.Xml.XmlNode Node { get; set ; } public XmlNode(System.Xml.XmlNode xmlNode)         {             Node = xmlNode;         } #endif     } public class XmlNodeList     { #if WIN8         public Windows.Data.Xml.Dom.XmlNodeList List { get; set; }         public int Count {get {return (int)List.Count;}}         public XmlNodeList(Windows.Data.Xml.Dom.XmlNodeList list)         {             List = list;         } #endif #if !WIN8 public System.Xml.XmlNodeList List { get; set ; } public int Count { get { return List.Count;}} public XmlNodeList(System.Xml.XmlNodeList list)         {             List = list;        } #endif     } From there I can then use my XmlNode and XmlNodeList in the calling code with out having to clutter the code with all of the additional #if switches. The challenge after this was the code that worked directly with the XMLDocument object needed to be seperate on both platforms since the method for populating the XmlDocument object is completly different on both platforms. To solve this issue. I made partial classes, one partial class for .NET and one for WinRT. Both projects have Links to the Partial Class that contains the code that is the same for the majority of the class, and the partial class contains the code that is unique to the version of the XmlDocument. The files with the little arrow in the lower left corner denotes 'linked files' and are shared in multiple projects but only exist in one location in source control. You can see that the _Win7 partial class is included directly in the project since it include code that is only for the .NET platform, where as it's cousin the _Win8 (not pictured above) has all of the code specific to the _Win8 platform. In the _Win7 partial class is this code: public partial class WUndergroundViewModel     { public static WUndergroundData GetWeatherData( double lat, double lng)         { WUndergroundData data = new WUndergroundData();             System.Net. WebClient c = new System.Net. 
WebClient(); string req = "http://api.wunderground.com/api/xxx/yesterday/conditions/forecast/q/[LAT],[LNG].xml" ;             req = req.Replace( "[LAT]" , lat.ToString());             req = req.Replace( "[LNG]" , lng.ToString()); XmlDocument doc = new XmlDocument();             doc.Load(c.OpenRead(req)); foreach (XmlNode item in doc.SelectNodes("/response/features/feature" ))             { switch (item.Node.InnerText)                 { case "yesterday" :                         ParseForecast( new FishingControls.XmlNodeList (doc.SelectNodes( "/response/forecast/txt_forecast/forecastdays/forecastday" )), new FishingControls.XmlNodeList (doc.SelectNodes( "/response/forecast/simpleforecast/forecastdays/forecastday" )), data); break ; case "conditions" :                         ParseCurrent( new FishingControls.XmlNode (doc.SelectSingleNode("/response/current_observation" )), data); break ; case "forecast" :                         ParseYesterday( new FishingControls.XmlNodeList (doc.SelectNodes( "/response/history/observations/observation" )),data); break ;                 }             } return data;         }     } in _win8 partial class is this code: public partial class WUndergroundViewModel     { public async static Task< WUndergroundData > GetWeatherData(double lat, double lng)         { WUndergroundData data = new WUndergroundData (); HttpClient c = new HttpClient (); string req = "http://api.wunderground.com/api/xxxx/yesterday/conditions/forecast/q/[LAT],[LNG].xml" ;             req = req.Replace( "[LAT]" , lat.ToString());             req = req.Replace( "[LNG]" , lng.ToString()); HttpResponseMessage msg = await c.GetAsync(req); string stream = await msg.Content.ReadAsStringAsync(); XmlDocument doc = new XmlDocument ();             doc.LoadXml(stream, null); foreach ( IXmlNode item in doc.SelectNodes("/response/features/feature" ))             { switch (item.InnerText)                 { case "yesterday" :                         ParseForecast( new FishingControls.XmlNodeList (doc.SelectNodes( "/response/forecast/txt_forecast/forecastdays/forecastday" )), new FishingControls.XmlNodeList (doc.SelectNodes( "/response/forecast/simpleforecast/forecastdays/forecastday" )), data); break; case "conditions" :                         ParseCurrent( new FishingControls.XmlNode (doc.SelectSingleNode("/response/current_observation" )), data); break; case "forecast" :                         ParseYesterday( new FishingControls.XmlNodeList (doc.SelectNodes( "/response/history/observations/observation")), data); break;                 }             } return data;         }     } Summary: This method allows me to have common 'business' code for both platforms that is pretty clean, and I manage the technology differences separately. Thank you tostringtheory for your suggestion, I was considering that approach.

    Read the article

  • ubuntu 12.04 desktop Error: unknown command 'gfxmode'. Pressing any key continues

    - by Andy
    Premise: Linux noobie here. I have the same issue as the OP: fresh 12.04 desktop, changed grub with Grub Customizer, and now I get: unknown command 'gfxmode', press any key, etc. I was asked to "re-post" this question and link to the thread I refer to above. I have tried what Tarek said, and nothing seems to work. I find two lines with gfxmode: function gfxmode { gfxmode \$linux_gfx_mode Note: not sure if it matters, but in the error the two single quotes around gfxmode are not the same; the first is a slanted quote mark, the second (after gfxmode) is a straight one. I commented out the whole line, and I tried adding 'set' before gfxmode; neither made any difference. I found another place that said to remove the line from another file, 40_custom, but I checked and those files do not contain anything relating to the line we are looking for: gfxmode $linux_gfx_mode Not sure what I am missing, but the file linux.save has recently appeared when searching for the line. Not sure if it's just a temp file of some kind. In any case I cannot seem to get it working. What am I missing? Thanks! P.S. sorry for any mess-ups in formatting :)
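    For what it's worth, the recovery path most often suggested for this Grub Customizer artifact (assuming it moved the stock scripts into /etc/grub.d/proxifiedScripts, which is also where that linux.save backup likely came from) is to change the call rather than the function definition, and then regenerate grub.cfg:

        # back up the script Grub Customizer generated
        sudo cp /etc/grub.d/proxifiedScripts/linux /etc/grub.d/proxifiedScripts/linux.orig

        # inside that file, change the menu-entry template line
        #     gfxmode \$linux_gfx_mode
        # to
        #     set gfxmode=\$linux_gfx_mode
        sudo nano /etc/grub.d/proxifiedScripts/linux

        # rebuild /boot/grub/grub.cfg so the change actually takes effect
        sudo update-grub

    Editing /boot/grub/grub.cfg directly does not stick, because update-grub (and every kernel update) rewrites it from the /etc/grub.d scripts; that may be why commenting the line out appeared to do nothing.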

    Read the article

  • What USB key would you recommend using for running a Windows 7 VM off of?

    - by Darryl Hein
    Because I can't find a good PHP editor for OS X, I develop in Windows with PhpEd. At the moment, my development time is split between a desktop and a laptop. To partially solve the problem of having 2 different environments, I have installed a virtual machine (through VirtualBox) and put the hard drive file on an external hard drive. At the moment, I've been connecting it through FireWire 800. I have 2 problems with this setup: (1) The hard drive is fairly large, so to carry the laptop and hard drive I pretty much require a backpack. (2) The hard drive requires quite a bit of power and therefore reduces the battery life (by about 40%). My thought is to move the VM hard drive onto a USB key. I realize it will be slower, but as I'm just using it for PHP development, there isn't a lot of disk activity in the VM. The only really intense time is boot-up; otherwise, it just about sits idle. Does anyone have any suggestions on a USB key to use for the VM? It would need to be a minimum of 32GB.

    Read the article

  • How do I separate codes with classes?

    - by Trycon
    I have this main class: package javagame; import org.newdawn.slick.GameContainer; import org.newdawn.slick.Graphics; import org.newdawn.slick.SlickException; import org.newdawn.slick.state.BasicGameState; import org.newdawn.slick.state.StateBasedGame; public class tests extends BasicGameState{ public boolean render=false; tests1 test = new tests1(); public tests(int test) { // TODO Auto-generated constructor stub } @Override public void init(GameContainer arg0, StateBasedGame arg1) throws SlickException { // TODO Auto-generated method stub } @Override public void render(GameContainer arg0, StateBasedGame arg1, Graphics g) throws SlickException { // TODO Auto-generated method stub if(render==true) { g.drawString("Hello",100,100); } } @Override public void update(GameContainer gc, StateBasedGame s, int delta) throws SlickException { // TODO Auto-generated method stub test.render=render; test.update(gc, s, delta); } @Override public int getID() { // TODO Auto-generated method stub return 1000; } } and this second class: package javagame; import org.newdawn.slick.GameContainer; import org.newdawn.slick.Input; import org.newdawn.slick.state.StateBasedGame; public class tests1 { public boolean render; public void update(GameContainer gc, StateBasedGame s, int delta) { Input input = gc.getInput(); if(input.isKeyPressed(Input.KEY_X)) { render=true; } } } I was looking for a way to avoid putting too much code in one class. I'm new to Java. When I run my game and press X, it does not work. How am I supposed to fix that?
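    On the actual bug: the data only flows one way. update() copies the outer render flag into test before calling test.update(), but never reads the result back, so the value tests1 sets when X is pressed is thrown away and the outer render stays false. A minimal fix in the tests class:

        @Override
        public void update(GameContainer gc, StateBasedGame s, int delta) throws SlickException {
            test.render = render;        // pass the current state down
            test.update(gc, s, delta);   // tests1 may set it to true when X is pressed
            render = test.render;        // copy the result back so render() can see it
        }

    Passing the tests instance into tests1 (or returning a boolean from tests1.update) would avoid the duplicated flag altogether.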

    Read the article

  • How to ssh to my dorm computer with shared public IP and no admin rights over the router?

    - by Aamir
    First of all, I am not a Linux or ssh newbie. I have searched for this problem on many forums extensively but nobody seemed to have discussed this. Please help me! I live in a student dorm (off-campus) and all students of the dorm share the same WAN IP (Internet or public IP), which is fortunately static. I am not an admin and have no control over the router that assigns private IP's to all of the students, so I can't really forward port 22 to my computer :( Is it still possible to establish an ssh connection to my dorm computer from a computer on campus?
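    Without any say over the router there is no way to get an inbound port forwarded, but an outbound reverse tunnel works: the dorm machine keeps a connection open to a host you can log into on campus, and that connection carries ssh back the other way. A hedged sketch, with host names as placeholders:

        # on the dorm computer (run at boot / in a loop, or via autossh to survive drops)
        ssh -N -R 2222:localhost:22 you@campus-host.example.edu

        # later, from the campus machine
        ssh -p 2222 dormuser@localhost

    The same trick works through any always-on box you control (a cheap VPS, for instance) if campus machines are not directly reachable either.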

    Read the article

  • how to avoid flickering in awt [on hold]

    - by Ishanth
    import java.awt.event.*; import java.awt.*; class circle1 extends Frame implements KeyListener { public int a=300; public int b=70; public int pacx=360; public int pacy=270; public circle1() { setTitle("circle"); addKeyListener(this); repaint(); } public void paint(Graphics g) { g.fillArc (a, b, 60, 60,pacx,pacy); } public void keyPressed(KeyEvent e) { int key=e.getKeyCode(); System.out.println(key); if(key==38) { b=b-5; //move pacman up pacx=135;pacy=270; //packman mouth upside if(b==75&&a>=20||b==75&&a<=945) { b=b+5; } else { repaint(); } } else if(key==40) { b=b+5; //move pacman downside pacx=315; pacy=270; //packman mouth down if(b==645&&a>=20||b==645&&a<=940) { b=b-5; } else{ repaint(); } } else if(key==37) { a=a-5; //move pacman leftside pacx=227; pacy=270; //packman mouth left if(a==15&&b>=75||a==15&&b<=640) { a=a+5; } else { repaint(); } } else if(key==39) { a=a+5; //move pacman rightside pacx=42;pacy=270; //packman mouth right if(a==945&&a>=80||a==945&&b<=640) { a=a-5; } else { repaint(); } } } public void keyReleased(KeyEvent e){} public void keyTyped(KeyEvent e){} public static void main(String args[]) { circle1 c=new circle1(); c.setVisible(true); c.setSize(400,400); } }
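    The flicker comes from AWT's default update(), which clears the whole frame before every repaint. The usual cure is double buffering: draw into an off-screen image and blit it in one operation, and override update() so the background is not cleared first. A hedged sketch of the changes to the class (resizing the window would additionally require recreating the buffer):

        private Image offscreen;   // java.awt.Image, covered by the existing import java.awt.*

        public void update(Graphics g) {
            paint(g);   // skip AWT's default background clear
        }

        public void paint(Graphics g) {
            if (offscreen == null) {
                offscreen = createImage(getWidth(), getHeight());
            }
            Graphics buffer = offscreen.getGraphics();
            buffer.clearRect(0, 0, getWidth(), getHeight());
            buffer.fillArc(a, b, 60, 60, pacx, pacy);
            g.drawImage(offscreen, 0, 0, this);
            buffer.dispose();
        }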

    Read the article

  • once VPNed into pfSense, unable to hit the public URLs of my websites - they are routed to the pfSense box

    - by Sean
    I have a pfSense box set up as the firewall/router/VPN appliance at my colo. Once I VPN into the colo (either pptp or openvpn, pptp preferred due to multiple clients and ease of configuration), I am able to hit all my servers by their private 10.10.10.x IPs and am able to browse the public internet without issue. When I try to hit the URL of a domain hosted by one of my servers, I am prompted for credentials. If I log in using the pfSense credentials, I'm connected to pfSense as if I'd used its internal IP. If I hack my hosts file to point the URL at the server's private IP, it works fine, but this is obviously not a good solution. To recap: not connected to VPN, www.myurl.com works; connected to VPN, www.myurl.com never makes it to the correct server, but is sent only to the pfSense box. I'm sure it's something small that I've missed in the pfSense config.

    Read the article

  • In practice, what are the key differences between Heroku and webfaction? [closed]

    - by jdotjdot
    I've been building and hosting webapps, mainly in Django and Flask, for some time now. Mainly, I've been hosting them on Heroku, because of the free tier and the ease of git-enabled application updating. I have seen that a lot of Django users prefer Webfaction. I looked through their offerings, and they seem to me like a standard web hosting service. Questions: Why might be webfaction considered a good hosting service for Django apps? If Heroku is generally called a "Platform-as-a-Service," what does that make Webfaction? Does it have any important similiarities/distinctions from Heroku that I might somehow be missing?

    Read the article

  • How can I log key presses in Game Maker?

    - by skeletalmonkey
    I'm trying to create a log of a player's actions as they play a game of Spelunky. The easiest way I've found to do this is to log what keys are pressed at each frame. What I don't know how to do is how to integrate this with the Game Maker source code of Spelunky. Is there a specific way to create a script that is checked every frame/tick (don't know the right term) and a command to find what buttons are pressed?
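    In legacy GameMaker (which Spelunky Classic was built with), per-frame work goes in a Step event, so one approach is a persistent controller object whose Step event appends a line describing the held keys each frame. A hedged GML sketch using the classic keyboard/file functions (key bindings and function availability should be double-checked against the GM version the Spelunky source actually uses):

        // Step event of a persistent obj_input_logger
        var f, line;
        line = string(current_time) + ",";
        if keyboard_check(vk_left)  line = line + "L";
        if keyboard_check(vk_right) line = line + "R";
        if keyboard_check(vk_up)    line = line + "U";
        if keyboard_check(vk_down)  line = line + "D";
        if keyboard_check(ord('Z')) line = line + "J";   // assumed jump key
        f = file_text_open_append("input_log.txt");
        file_text_write_string(f, line);
        file_text_writeln(f);
        file_text_close(f);

    Opening and closing the file every step is the simplest thing that works; opening it once in a Create event and closing it in a Game End event would be cheaper.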

    Read the article

  • Ubuntu 9.10 RSA authentication: ssh fails, filezilla runs fine

    - by MariusPontmercy
    This is quite a mistery for me. I usually use passwordless RSA authentication to login into my remote *nix servers with ssh and sftp. Never had any problem until now. I cannot connect to an Ubuntu 9.10 machine: user@myclient$ ssh -i .ssh/Ganymede_key [email protected] [...] debug1: Host 'ganymede.server.com' is known and matches the RSA host key. debug1: Found key in /home/user/.ssh/known_hosts:14 debug2: bits set: 494/1024 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: .ssh/Ganymede_key (0xb96a0ef8) debug2: key: .ssh/Ganymede_key ((nil)) debug1: Authentications that can continue: publickey,password,keyboard-interactive debug1: Next authentication method: publickey debug1: Offering public key: .ssh/Ganymede_key debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey,password,keyboard-interactive debug1: Trying private key: .ssh/Ganymede_key debug1: read PEM private key done: type RSA debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey,password,keyboard-interactive debug2: we did not send a packet, disable method debug1: Next authentication method: keyboard-interactive debug2: userauth_kbdint debug2: we sent a keyboard-interactive packet, wait for reply debug2: input_userauth_info_req debug2: input_userauth_info_req: num_prompts 1 Then it falls back to password authentication. If I disable password authentication on the remote machine my connection attempt just fails with a "Permission denied (publickey)." state. Same thing for sftp from command line. The "funny" thing is that the exact same RSA key works like a charm with a Filezilla sftp session instead: 12:08:00 Trace: Offered public key from "/home/user/.filezilla/keys/Ganymede_key" 12:08:00 Trace: Offer of public key accepted, trying to authenticate using it. 12:08:01 Trace: Access granted 12:08:01 Trace: Opened channel for session 12:08:01 Trace: Started a shell/command 12:08:01 Status: Connected to ganymede.server.com 12:08:02 Trace: CSftpControlSocket::ConnectParseResponse() 12:08:02 Trace: CSftpControlSocket::ResetOperation(0) 12:08:02 Trace: CControlSocket::ResetOperation(0) 12:08:02 Status: Retrieving directory listing... 12:08:02 Trace: CSftpControlSocket::SendNextCommand() 12:08:02 Trace: CSftpControlSocket::ChangeDirSend() 12:08:02 Command: pwd 12:08:02 Response: Current directory is: "/root" 12:08:02 Trace: CSftpControlSocket::ResetOperation(0) 12:08:02 Trace: CControlSocket::ResetOperation(0) 12:08:02 Trace: CSftpControlSocket::ParseSubcommandResult(0) 12:08:02 Trace: CSftpControlSocket::ListSubcommandResult() 12:08:02 Trace: CSftpControlSocket::ResetOperation(0) 12:08:02 Trace: CControlSocket::ResetOperation(0) 12:08:02 Status: Directory listing successful Any thoughts? M
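    Since this is publickey + sftp against the same sshd, the server's own log (or a one-off debug instance of sshd) will state exactly why the key offered by the OpenSSH client is refused while the one FileZilla offers is accepted, e.g. a line-wrapped or mangled entry in authorized_keys, an unexpected AuthorizedKeysFile path, or StrictModes rejecting permissions on the home directory, ~/.ssh or the key file (and, on Ubuntu of that era, an encrypted home directory that is unreadable before login). Hedged commands, host name as a placeholder:

        # on the Ubuntu 9.10 server, while the client retries
        sudo tail -f /var/log/auth.log

        # or run a second sshd in debug mode on a spare port and connect to it
        sudo /usr/sbin/sshd -d -p 2222
        ssh -p 2222 -i .ssh/Ganymede_key user@ganymede.server.com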

    Read the article

  • How to create a new public AMI for windows?

    - by user67081
    I am trying to make a Windows 2008 AMI that is a nice clean 64-bit starter pack (IIS, SQL Express, ASP.NET MVC, etc.). I would like to make it a public AMI when it's done. Therein lies the problem. I can make an AMI from my image, no problem. But I can't seem to get new instances to generate their own passwords. The result is that I have a new instance that works great with my password. So what is the process for converting my EBS-backed instance into an AMI that will auto-generate its password and do all the other setup steps that Amazon wants to go through when a new instance starts up? Thanks in advance.
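    On Windows Server 2008 instances the generate-password-on-first-boot behaviour is handled by the EC2Config service, and it switches itself off once a password has been generated; re-enabling it (and running sysprep from EC2Config's Bundle tab) right before creating the image is, if memory serves, what makes the resulting AMI hand out fresh passwords. The relevant fragment, assuming the default install path C:\Program Files\Amazon\Ec2ConfigService\Settings\config.xml:

        <Plugin>
          <Name>Ec2SetPassword</Name>
          <State>Enabled</State>
        </Plugin>

    The EC2Config Service Settings UI exposes the same option as a checkbox, which avoids editing the XML by hand.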

    Read the article

  • Auto Launching PHP-FPM

    - by Seth
    My plist file <?xml version='1.0' encoding='UTF-8'?> <!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd" > <plist version='1.0'> <dict> <key>Label</key><string>org.macports.php-fpm</string> <key>ProgramArguments</key> <array> <string>/opt/local/bin/daemondo</string> <string>--label=php-fpm</string> <string>--start-cmd</string> <string>/opt/local/sbin/php-fpm</string> <string>;</string> <string>--pid=fileauto</string> <string>--pidfile</string> <string>/opt/local/var/run/php-fpm/php-fpm.pid</string> </array> <key>Debug</key><false/> <key>Disabled</key><true/> <key>OnDemand</key><false/> </dict> </plist> After rebooting, it's not loading up automatically. I still have to manually start php-fpm. I have tried unloading and adding RunAtLoad etc. with no luck and tried both these launchctl commands. sudo launchctl load -F /Library/LaunchDaemons/org.macports.php-fpm.plist sudo launchctl load -w /Library/LaunchDaemons/org.macports.php-fpm.plist
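    Two things stand out before touching the daemondo arguments themselves: launchd refuses daemon plists in /Library/LaunchDaemons that are not owned by root or are group/world-writable, and this plist explicitly sets Disabled to true, which the -w flag is supposed to override but is simpler to just fix in the file. A hedged adjustment:

        sudo chown root:wheel /Library/LaunchDaemons/org.macports.php-fpm.plist
        sudo chmod 644 /Library/LaunchDaemons/org.macports.php-fpm.plist

    and, in the dict itself:

        <key>Disabled</key><false/>
        <key>RunAtLoad</key><true/>

    followed by a plain "sudo launchctl load /Library/LaunchDaemons/org.macports.php-fpm.plist" (or a reboot) to test.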

    Read the article

  • Are there any viable DNS or LDAP alternatives for distributed key/value storage and retrieval?

    - by makerofthings7
    I'm working on a software app that needs distributed, decentralized name resolution and isn't bound to TCP/IP. Or more precisely, I need to store a "key" and look up its value, and the key may be a string, a number, or any other realistic data type. Examples: With a phone number, look up a name (or with an area code, redirect to the server that handles that exchange). With an IP address, get a DNS name or a Whois contact (string value). With a string, get an IP (like a DNS TXT or SRV record). I'm thinking out of the box here and looking for any software that allows for this. (more info below) Are there any secure, scalable DNS alternatives that have gained notoriety? I could ask on StackOverflow, but think the infrastructure groups would have better insight on this. Edit More info: I'm looking at "Namecoin", the DNS version of Bitcoin, and since that project is faltering, I'm looking at alternative ways to store name-value pairs, with an optional qualifier. I think a name-value pair of global interest is useful, but on a limited scale. Namecoin tried to be too much, and ended up becoming nothing. I'm trying to solve that problem by researching alternatives and applying distributed technologies where applicable. Bitcoin/Namecoin offers a Distributed Hash Table, which has some positive aspects, but is not useful for DNS, except for root servers.

    Read the article
