Search Results

Search found 13892 results on 556 pages for 'employee info starter kit'.


  • Cannot connect to website - SSL handshaking fails

    - by ravenspoint
    So I cannot connect to certain websites. Just a few; most are OK. The one I really care about is paypal.com. I have done the usual things: checked my /etc/hosts, flushed the DNS cache, checked the firewall, switched virus protection on and off, switched ad blocking on and off, and pinged the sites. Eventually, I decided to look at what curl is saying in detail:

    == Info: About to connect() to www.paypal.com port 443 (#0)
    == Info: Trying 66.211.169.2...
    == Info: connected
    == Info: SSLv3, TLS handshake, Client hello (1):
    => Send SSL data, 110 bytes (0x6e)
    0000: 01 00 00 6a 03 01 4f 6c aa 8c 57 2b 3d 1e 74 64 ...j..Ol..W+=.td
    0010: c1 27 25 a5 3a 12 7f 3f 41 0a 17 15 2e c9 67 7c .'%.:.?A.....g|
    0020: b3 e1 f6 9a db a9 00 00 2a 00 39 00 38 00 35 00 ........*.9.8.5.
    0030: 16 00 13 00 0a 00 33 00 32 00 2f 00 07 00 05 00 ......3.2./.....
    0040: 04 00 15 00 12 00 09 00 14 00 11 00 08 00 06 00 ................
    0050: 03 00 ff 01 00 00 17 00 00 00 13 00 11 00 00 0e ................
    0060: 77 77 77 2e 70 61 79 70 61 6c 2e 63 6f 6d       www.paypal.com
    (hangs here forever)

    This looks to me like PayPal is refusing to reply to the first SSL handshake. I don't know much about SSL, but comparing this to the output from a site that works for me seems to make it obvious:

    == Info: About to connect() to www.cibc.com port 443 (#0)
    == Info: Trying 159.231.80.200...
    == Info: connected
    == Info: SSLv3, TLS handshake, Client hello (1):
    => Send SSL data, 108 bytes (0x6c)
    0000: 01 00 00 68 03 01 4f 6c ad 6a 1f 67 d5 84 c4 4b ...h..Ol.j.g...K
    0010: 0d 49 ae d6 b9 5b c3 63 f9 48 aa 18 da 43 d1 32 .I...[.c.H...C.2
    0020: 47 ae 17 e5 cd e9 00 00 2a 00 39 00 38 00 35 00 G.......*.9.8.5.
    0030: 16 00 13 00 0a 00 33 00 32 00 2f 00 07 00 05 00 ......3.2./.....
    0040: 04 00 15 00 12 00 09 00 14 00 11 00 08 00 06 00 ................
    0050: 03 00 ff 01 00 00 15 00 00 00 11 00 0f 00 00 0c ................
    0060: 77 77 77 2e 63 69 62 63 2e 63 6f 6d             www.cibc.com
    == Info: SSLv3, TLS handshake, Server hello (2):
    <= Recv SSL data, 74 bytes (0x4a)
    0000: 02 00 00 46 03 01 00 00 58 cf 26 e2 e1 65 db 11 ...F....X.&..e..
    0010: bc 6f 26 7b 3b 6d eb 14 5f ad 47 dd 86 ea 4d a3 .o&{;m.._.G...M.
    0020: fb 9f b7 2a 54 3e 20 5f 6b 04 5a 12 38 64 5d 18 ...*T> _k.Z.8d].
    0030: 65 9e e9 cd 61 eb 91 c1 16 25 61 30 bb 08 2a 78 e...a....%a0..*x
    0040: b8 ee b8 7e f2 65 6a 00 04 00                   ...~.ej...
    == Info: SSLv3, TLS handshake, CERT (11):
    ... and so on, working nicely; I eventually get some nice HTML.

    Now I am really stuck. This has been going on for five days, so I am pretty sure that the problem is not with PayPal. But what on my system could be interfering with the SSL handshaking done by curl with this particular site? I suppose I might not be offering any certificates that PayPal accepts, but wouldn't I get a reply telling me so, or at least an error?
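    Not from the original question: to reproduce this outside curl, a minimal C# handshake probe (only the host name is taken from the post; everything else is an assumption) can confirm whether the TLS layer itself hangs or returns an error:

    using System;
    using System.Net.Security;
    using System.Net.Sockets;

    class SslProbe
    {
        static void Main()
        {
            string host = "www.paypal.com"; // the failing site from the question
            using (TcpClient tcp = new TcpClient(host, 443))
            using (SslStream ssl = new SslStream(tcp.GetStream()))
            {
                // Throws AuthenticationException/IOException if the handshake
                // fails outright; hangs here if the server never answers the
                // Client hello, matching the curl trace above.
                ssl.AuthenticateAsClient(host);
                Console.WriteLine("Handshake OK: {0}", ssl.SslProtocol);
            }
        }
    }

    If this probe hangs the same way, the interference sits below curl (firewall, proxy, or an MTU/packet-size problem on the path); if it succeeds, the problem is specific to curl's configuration.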


  • ActiveMQ Ajax Client

    - by Lily
    I am trying to write a simple Ajax client to send and receive messages. It deploys successfully, but I never receive a message in the client. I am beating myself up trying to figure out what I am missing, but still can't make it work. Here is my setup:

    1) I create a dynamic web application named ActiveMQAjaxService and put activemq-web.jar and all necessary dependencies in the WEB-INF/lib folder. This way, AjaxServlet and MessageServlet will be deployed.

    2) I start the ActiveMQ server on the command line with ./activemq. ActiveMQ starts successfully and displays:

    Listening for connections at: tcp://lilyubuntu:61616
    INFO | Connector openwire Started
    INFO | ActiveMQ JMS Message Broker (localhost, ID:lilyubuntu-56855-1272317001405-0:0) started
    INFO | Logging to org.slf4j.impl.JCLLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
    INFO | jetty-6.1.9
    INFO | ActiveMQ WebConsole initialized.
    INFO | Initializing Spring FrameworkServlet 'dispatcher'
    INFO | ActiveMQ Console at http://0.0.0.0:8161/admin
    INFO | Initializing Spring root WebApplicationContext
    INFO | Connector vm://localhost Started
    INFO | Camel Console at http://0.0.0.0:8161/camel
    INFO | ActiveMQ Web Demos at http://0.0.0.0:8161/demo
    INFO | RESTful file access application at http://0.0.0.0:8161/fileserver
    INFO | Started SelectChannelConnector@0.0.0.0:8161

    3) index.xml, which is the HTML to test the client:

    <?xml version="1.0" encoding="UTF-8" ?>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head>
      <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
      <script type="text/javascript" src="amq/amq.js"></script>
      <script type="text/javascript">amq.uri='amq';</script>
      <title>Hello Ajax ActiveMQ</title>
    </head>
    <body>
      <p>Hello World!</p>
      <script type="text/javascript">
        amq.sendMessage("topic://myDetector", "message");
        var myHandler = {
          rcvMessage: function(message) {
            alert("received " + message);
          }
        };
        function myPoll(first) {
          if (first) {
            amq.addListener('myDetector', 'topic://myDetector', myHandler.rcvMessage);
          }
        }
        amq.addPollHandler(myPoll);
      </script>
    </body>
    </html>

    4) web.xml:

    <display-name>ActiveMQ Web Demos</display-name>
    <description>Apache ActiveMQ Web Demos</description>
    <context-param>
      <param-name>org.apache.activemq.brokerURL</param-name>
      <param-value>vm://localhost</param-value> <!-- I also tried tcp://localhost:61616 -->
      <description>The URL of the Message Broker to connect to</description>
    </context-param>
    <context-param>
      <param-name>org.apache.activemq.embeddedBroker</param-name>
      <param-value>true</param-value>
      <description>Whether we should include an embedded broker or not</description>
    </context-param>
    <!-- the subscription REST servlet -->
    <servlet>
      <servlet-name>AjaxServlet</servlet-name>
      <servlet-class>org.apache.activemq.web.AjaxServlet</servlet-class>
      <load-on-startup>1</load-on-startup>
    </servlet>
    <servlet>
      <servlet-name>MessageServlet</servlet-name>
      <servlet-class>org.apache.activemq.web.MessageServlet</servlet-class>
      <load-on-startup>1</load-on-startup>
      <!-- Uncomment this parameter if you plan to use multiple consumers over REST
      <init-param>
        <param-name>destinationOptions</param-name>
        <param-value>consumer.prefetchSize=1</param-value>
      </init-param>
      -->
    </servlet>
    <!-- the queue browse servlet -->
    <filter>
      <filter-name>session</filter-name>
      <filter-class>org.apache.activemq.web.SessionFilter</filter-class>
    </filter>
    <filter-mapping>
      <filter-name>session</filter-name>
      <url-pattern>/*</url-pattern>
    </filter-mapping>

    After all of this, I deploy the web app successfully, but when I try it out at http://localhost:8080/ActiveMQAjaxService/index.html, nothing happens. I can run the portfolioPublisher demo successfully at http://localhost:8161/demo/portfolio/portfolio.html and see the numbers update all the time. But for my simple web app, nothing really works. Any suggestion/hint is welcome. Thanks so much, Lily
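    One thing worth double-checking (it is not shown in the web.xml excerpt above, and the stock ActiveMQ web demo relies on it) is that the two servlets are mapped to the paths amq.js talks to; the demo's descriptor maps them along these lines:

    <servlet-mapping>
      <servlet-name>AjaxServlet</servlet-name>
      <url-pattern>/amq/*</url-pattern>
    </servlet-mapping>
    <servlet-mapping>
      <servlet-name>MessageServlet</servlet-name>
      <url-pattern>/message/*</url-pattern>
    </servlet-mapping>

    Without a mapping for /amq/*, the amq.uri='amq' setting in index.xml has nothing to post to, which would produce exactly the silent failure described.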


  • Asynchrony in C# 5: Dataflow Async Logger Sample

    - by javarg
    Check out these (very simple) code examples for TPL Dataflow. Suppose you are developing an async logger to register application events to different sinks or log writers. The logger architecture would be as follows: note how blocks can be composed to achieve the desired behavior. The BufferBlock<T> is the pool of log entries to be processed, whereas the linked ActionBlock<TInput> instances represent the log writers or sinks. This composition allows only one ActionBlock to consume each entry. The implementation code would be something similar to this (add a reference to System.Threading.Tasks.Dataflow.dll in %User Documents%\Microsoft Visual Studio Async CTP\Documentation):

    TPL Dataflow Logger

    var bufferBlock = new BufferBlock<Tuple<LogLevel, string>>();

    ActionBlock<Tuple<LogLevel, string>> infoLogger =
        new ActionBlock<Tuple<LogLevel, string>>(
            e => Console.WriteLine("Info: {0}", e.Item2));

    ActionBlock<Tuple<LogLevel, string>> errorLogger =
        new ActionBlock<Tuple<LogLevel, string>>(
            e => Console.WriteLine("Error: {0}", e.Item2));

    bufferBlock.LinkTo(infoLogger, e => (e.Item1 & LogLevel.Info) != LogLevel.None);
    bufferBlock.LinkTo(errorLogger, e => (e.Item1 & LogLevel.Error) != LogLevel.None);

    bufferBlock.Post(new Tuple<LogLevel, string>(LogLevel.Info, "info message"));
    bufferBlock.Post(new Tuple<LogLevel, string>(LogLevel.Error, "error message"));

    Note the filter applied to each link (in this case, the logging level selects the writer used). We can specify message filters using Predicate functions on each link. Now, the previous sample is useless for a logger, since logging levels are not exclusive (several writers could need to process a single message). Let's use a BroadcastBlock<T> instead of a BufferBlock<T>.

    Broadcast Logger

    var bufferBlock = new BroadcastBlock<Tuple<LogLevel, string>>(
        e => new Tuple<LogLevel, string>(e.Item1, e.Item2));

    ActionBlock<Tuple<LogLevel, string>> infoLogger =
        new ActionBlock<Tuple<LogLevel, string>>(
            e => Console.WriteLine("Info: {0}", e.Item2));

    ActionBlock<Tuple<LogLevel, string>> errorLogger =
        new ActionBlock<Tuple<LogLevel, string>>(
            e => Console.WriteLine("Error: {0}", e.Item2));

    ActionBlock<Tuple<LogLevel, string>> allLogger =
        new ActionBlock<Tuple<LogLevel, string>>(
            e => Console.WriteLine("All: {0}", e.Item2));

    bufferBlock.LinkTo(infoLogger, e => (e.Item1 & LogLevel.Info) != LogLevel.None);
    bufferBlock.LinkTo(errorLogger, e => (e.Item1 & LogLevel.Error) != LogLevel.None);
    bufferBlock.LinkTo(allLogger, e => (e.Item1 & LogLevel.All) != LogLevel.None);

    bufferBlock.Post(new Tuple<LogLevel, string>(LogLevel.Info, "info message"));
    bufferBlock.Post(new Tuple<LogLevel, string>(LogLevel.Error, "error message"));

    As this block copies the message to all its outputs, we need to define the copy function in the block constructor. In this case we create a new Tuple, but you can always use the identity function if you want to pass the same reference to every output. Try both scenarios and compare the results.
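    One detail the samples above leave implicit is shutdown: nothing waits for the posted messages to be written before the process exits. A sketch of draining the pipeline, written against the released System.Threading.Tasks.Dataflow API (names may differ slightly in the CTP drop the post targets):

    // Propagate completion from the buffer to each writer when linking.
    var opts = new DataflowLinkOptions { PropagateCompletion = true };
    bufferBlock.LinkTo(infoLogger, opts, e => (e.Item1 & LogLevel.Info) != LogLevel.None);
    bufferBlock.LinkTo(errorLogger, opts, e => (e.Item1 & LogLevel.Error) != LogLevel.None);

    // ... post messages ...

    bufferBlock.Complete(); // signal that no more log entries will arrive
    Task.WaitAll(infoLogger.Completion, errorLogger.Completion); // flush the writers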


  • Booting the liveCD/USB in EFI mode fails on Samsung Tablet XE700T1A

    - by F.L.
    My tablet is a Samsung Series 7 Slate (XE700T1A-A02FR, French model) built on Intel's Sandy Bridge architecture. The main issue with this tablet is that it ships with Windows 7 installed in (U)EFI mode (GPT partition table, etc.), so I'd like to get an EFI dual boot with Ubuntu. But it seems I can't boot the LiveCD in EFI mode. It starts loading (up to initrd), but I then get a blank (black) screen. I've tried the nomodeset kernel option (as well as removing quiet and splash) with no luck.

    [2012-09-27] I have used the Ubuntu 12.04.1 Desktop ISO (I have read somewhere that it is the only one that can boot in EFI mode). I'd say this has something to do with UEFI, since the LiveCD boots in BIOS mode but not in EFI mode. Besides, I am not sure my boot info will help, since I can't boot the LiveCD in EFI mode; as a result I can't install Ubuntu in EFI mode, so it would be the boot info from the LiveCD booted in BIOS mode. This happens with a ubuntu-12.04.1-desktop-amd64 ISO used on a LiveUSB. The LiveUSB was created by dd'ing the ISO onto the full disk device (i.e. /dev/sdx, no number) of the flash drive. I have also tried copying the LiveCD files onto a primary GPT partition, but with no luck: I just get the grub shell, no menu, no install option.

    [2012-09-28] Today I tried a flash drive created with Ubuntu's Startup Disk Creator and the alternate 12.04.1 64-bit ISO. I get a grub menu in text mode (which means it did start in EFI mode) with install and test options. But when I start any of these, I simply get a black screen (no cursor, neither mouse nor text-mode cursor). I tried removing the 'quiet' option and adding nomodeset and acpi=off, but it didn't do any good. So this is the same result as for the LiveCD.

    [2012-10-01] I have tried a version of the secure remix image written via usb-creator-gtk. Booting from the USB key has the same symptoms: booting in EFI mode is impossible (I get a menu, but whatever entry I choose, I get the blank screen problem). Booting in BIOS mode works, and I did the install. Then I used boot-repair to try installing grub-efi and get a system that would boot in EFI mode. But I can't boot this system, because the EFI firmware doesn't seem to detect that sda contains a valid EFI partition. Here is the resulting boot-info: Boot info 1253554

    [2012-10-01] Today, I reinstalled the pre-shipped version of Windows 7, and then installed Ubuntu from a secure-remix ISO dumped onto a USB flash drive via usb-creator-gtk, booted in BIOS mode. When the install ended, I chose "continue testing", then used boot-repair to get the bootloader installed. Now, when I boot the tablet, I get the grub menu and it can chainload Windows 7 flawlessly. But when I try to start one of the Ubuntu options I get the same old blank screen. Here is the new boot-info: Boot info 1253927

    [2012-10-01] I tried installing the 3.3 kernel by chrooting into the installed system from a live USB boot (secure remix again). Same symptoms. I feel the key to this is that the device's EFI firmware (which is EFI v2.0) exposes the graphics hardware in a way that prevents the kernel from initializing it, and thus prevents it from booting (the kernel stops all drive access just after the screen turns a very dark purple). Here is some info on the UEFI firmware as given by rEFInd:

    EFI revision: 2.00
    Platform: x86_64 (64 bit)
    Firmware: American Megatrends 4.635
    Screen Output: Graphics Output (UEFI), 800x600

    [2012-10-08] This weekend I tried loading the kernel with elilo. Even though I didn't have more luck booting the kernel, elilo gives more info while loading it. I think the next step is trying to load a kernel with an EFI stub directly.


  • How to use symbolic links in Windows Server 2008 R2 across the network (mklink)

    - by server info
    I have one server (Srv1) which holds data on file shares, and its storage is full. I have a second server (Srv2) which has a lot more space. Now I would like to transfer all the data from Srv1 to Srv2 and leave links behind pointing to the new destination. I found mklink very useful here, but unfortunately it does not work over the network, as the documentation also points out. People rely heavily on the existing paths, so it would be helpful if someone has a pointer for me on how to handle symbolic links across the network with Windows servers. I am running Windows Server 2008. Thanks for any help
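    A hedged sketch (the paths are made up): remote symlinks do exist in Windows, but by default the client-side evaluation policy blocks following links that resolve to remote targets, so the policy has to be opened up on the machines that consume the links before an mklink to a UNC path behaves:

    rem allow local-to-remote and remote-to-remote symlink evaluation
    fsutil behavior set SymlinkEvaluation L2L:1 L2R:1 R2R:1 R2L:1

    rem directory symlink on Srv1 pointing at the moved data on Srv2
    mklink /D D:\Shares\Data \\Srv2\Share\Data

    Depending on how clients access the shares, a DFS namespace is often the more robust way to keep old path names alive after moving data between servers.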


  • Deck from London UG 20110616 - Building a Reporting Brick capable of 1.2GBytes/sec and 80K IOs/sec for less than £2K

    - by tonyrogerson
    The Reporting Brick concept is not really anything new; it starts the walk toward bringing the work Jim Gray, Tom Barclay et al. did on CyberBricks up to date in terms of current kit. A reporting brick is simply a box built from commodity kit utilising commodity SSDs, namely the OCZ IBIS drives, to gain extremely high levels of performance for a fraction of the cost required for typical server and SAN installs today. I'll write it up over the next few months as I work further on the concept; for now the attached deck summarises some of the ideas around it. The deck was presented at last night's London SQL Server User Group; I will be presenting it again in Edinburgh on the 29th of June and at other locations later in the year. Deck: Commodity Kit.pptx


  • Useful Sharepoint Goodies

    - by Patrick Olurotimi Ige
    I came across this list of very interesting stuff below (and it could save lots of time):

    1. Faceted Search: http://facetedsearch.codeplex.com/
    2. Podcasting Kit for SharePoint: http://pks.codeplex.com/
    3. Knowledge Base: http://spkb.codeplex.com/
    4. SharePoint Branding Tool: http://brandingtool.codeplex.com/
    5. SharePoint User Account Control: http://spuac.codeplex.com/
    6. SharePoint Enhanced Calendar: http://spenhancedcalendar.codeplex.com/
    7. Enhanced Discussion Board: http://edb.codeplex.com/
    8. Wildcard Search: http://spwildcardsearch.codeplex.com/
    9. SharePoint Usage Logging Kit: http://sulk.codeplex.com/
    10. SharePoint Zip: http://sharepointzip.codeplex.com/
    11. Facebook Kit for SharePoint: http://fks.codeplex.com/
    12. Short Messages: http://spmessaging.codeplex.com/
    13. Color coded calendar: http://planetwilson.codeplex.com/Release/ProjectReleases.aspx?ReleaseId=11814
    14. Most Popular Pages on SharePoint: http://popularpages.codeplex.com/

    Thanks to my two bits, who put the list together.


  • Banned Children’s Toys from Christmases Past

    - by Jason Fitzpatrick
    What could possibly go wrong giving a child a nuclear science kit that includes highly poisonous materials? Everything, of course, which is why that particular toy lasted only a single holiday season. BuzzFeed reports on some toys of holidays past that were quickly pulled off the shelves. In regard to the nuclear kit pictured here, they write: Only available from 1951–1952, this science kit for CHILDREN included four types of uranium ore, a Geiger counter, a comic called Dagwood Splits the Atom, and a coupon for ordering MORE radioactive materials. One of the radioactive materials included was Po-210 (Polonium) which, by mass, is 250,000 times more toxic than hydrogen cyanide. "Merry Christmas, Kevin, here's that giant box of poison you asked for." Hit up the link below for more entries, including some pulled from the shelves as recently as 2007. 8 Banned Children's Toys From Yesteryear [BuzzFeed]


  • Fusion HCM SaaS – Integration

    - by Kiran Mundy
    A typical implementation pattern we're seeing with Fusion Apps early adopters is implementing a few Fusion HCM applications that bring the most benefit to their company with the least disruption to existing programs and interfaces. Very often this ends up being Fusion Goals & Performance, Talent, Compensation or Benefits, often with Taleo for recruiting. The implementation picture looks like what you see below: here, you can see that all the "downstream integrations" from the on-premise Core HR are unaffected, because the master for employee data is still your on-premise Core HR system; all updates and new hires are made there (although they may be fed in from Taleo to start with).

    As a second phase, when customers migrate Core HR to Fusion HCM, they have to come up with a strategy to manage integrations to all their downstream applications that require employee details. For customers coming from EBS HR, a short-term strategy that allows for minimal impact is to extract employee data from Fusion (via HCM Extract), load the shared EBS HR tables (which are part of an EBS Financials install anyway), and let your downstream integrations continue to function based on this data, as shown below. If you are not coming from EBS HR and there are license implications, you may want to consider: creating an on-premise warehouse for extracting data from Fusion Apps, or leveraging Fusion Apps web services (available to SaaS customers starting R7) to directly retrieve/write data to Fusion Apps.

    Integration Tools

    File Based Loader: This is the primary mechanism for loading HCM data (both initial load and incremental updates) into Fusion HCM. Employee and related data can be uploaded into Fusion HCM using File Based Loader. Note that the ability to schedule File Based Loader to run on a pre-defined schedule will be available as a patch on top of Rel 5. Hr2Hr has been deprecated in favor of File Based Loader, but for existing customers using Hr2Hr, here are some sample scripts that show how to get more informative error messages. They can be run by creating data model SQL queries in BI Publisher. The scripts currently have hard-coded values for request id and loader batch id, which your developer will need to update to the correct values for you. The BI Publisher training session recorded on Apr 18th is available here (under "Recordings"). This will enable a somewhat technical resource to create a data model SQL query.

    Links to Documentation & Training:
    Reference documentation for File Based Loader on docs.oracle.com
    FBL 1.1 MOS Doc Id 1533860.1
    Sample demo data files for File Based Loader
    HCM SaaS Integrations ppt and recording

    EBS APIs (loading information into EBS full or shared HCM): This could be candidate information being loaded from Taleo into EBS, or employee information being loaded from Fusion HCM into an EBS shared HR install (for downstream applications and EBS Financials).
    Oracle HRMS Product Family Publicly Callable Business Process APIs (A Reference Consolidation) [ID 216838.1]: a guide to the EBS R12 Integration Repository accessible from an EBS instance.
    EBS HRMS Publicly Callable Business Process APIs in Release 11i & 12 [ID 121964.1]

    Fusion HCM Extract: Fusion HCM Extract is the primary mechanism used to extract employee information from Fusion HCM. Refer to the "Configure Identity Sync" doc on MOS for additional mechanisms. Additional documentation (you'll need an oracle.com account to access): HCM Extracts User Guides (Rel 4 & 5), HCM Extract Entity/Attributes (Rel 5), HCM Extract User Guide (Rel 5). If you don't have an oracle.com account, download the zipped HCM Extract Rel 5 docs (click on File --> Download on the next screen). View training recordings on Fusion HCM Extract.

    Benefits Extract: To set up the benefits extract, refer to the following guide. Page 2-15 of the user documentation describes how to use the benefits extract. Benefit enrollments can also be uploaded into Fusion Benefits; instructions are here, along with a sample upload file. However, if the defined benefits extract does not meet your requirements, you can use BI Publisher (link to the BI Publisher presentation recording from Apr 18th) to create your own version of the benefits extract. You can start with the data model query underlying the benefits extract.

    Payroll Interface: Fusion Payroll Interface enables you to capture personal payroll information, such as earnings and deductions, along with other data from Oracle Fusion Human Capital Management, and send that information to a third-party payroll provider. Documentation: payroll interface guide, sample file, and the DBIs used for the payroll interface.


  • Implementing an Interceptor Using NHibernate’s Built In Dynamic Proxy Generator

    - by Ricardo Peres
    NHibernate 3.2 came with an included proxy generator, which means there is no longer the need (or the possibility, for that matter) to choose between Castle DynamicProxy, LinFu and Spring. This is actually a good thing, because it means one less assembly to deploy. Apparently, this generator was based, at least partially, on LinFu. As there are not many tutorials out there demonstrating its usage, here's one demonstrating one of the most requested features: implementing INotifyPropertyChanged. This interceptor, of course, will still offer all of the NHibernate functionality that you are used to, such as lazy loading. We will start by implementing an NHibernate interceptor, by inheriting from the base class NHibernate.EmptyInterceptor. This class does not do anything by itself, but it allows us to plug in behavior by overriding some of its methods, in this case Instantiate:

    public class NotifyPropertyChangedInterceptor : EmptyInterceptor
    {
        private ISession session = null;

        private static readonly ProxyFactory factory = new ProxyFactory();

        public override void SetSession(ISession session)
        {
            this.session = session;
            base.SetSession(session);
        }

        public override Object Instantiate(String clazz, EntityMode entityMode, Object id)
        {
            Type entityType = Type.GetType(clazz);
            IProxy proxy = factory.CreateProxy(entityType, new _NotifyPropertyChangedInterceptor(), typeof(INotifyPropertyChanged)) as IProxy;

            _NotifyPropertyChangedInterceptor interceptor = proxy.Interceptor as _NotifyPropertyChangedInterceptor;
            interceptor.Proxy = this.session.SessionFactory.GetClassMetadata(entityType).Instantiate(id, entityMode);

            this.session.SessionFactory.GetClassMetadata(entityType).SetIdentifier(proxy, id, entityMode);

            return (proxy);
        }
    }

    Then we need a class that implements the NHibernate dynamic proxy behavior; let's place it inside our interceptor, because it will only need to be used there:

    class _NotifyPropertyChangedInterceptor : NHibernate.Proxy.DynamicProxy.IInterceptor
    {
        private PropertyChangedEventHandler changed = delegate { };

        public Object Proxy
        {
            get;
            set;
        }

        #region IInterceptor Members

        public Object Intercept(InvocationInfo info)
        {
            Boolean isSetter = info.TargetMethod.Name.StartsWith("set_") == true;
            Object result = null;

            if (info.TargetMethod.Name == "add_PropertyChanged")
            {
                PropertyChangedEventHandler propertyChangedEventHandler = info.Arguments[0] as PropertyChangedEventHandler;
                this.changed += propertyChangedEventHandler;
            }
            else if (info.TargetMethod.Name == "remove_PropertyChanged")
            {
                PropertyChangedEventHandler propertyChangedEventHandler = info.Arguments[0] as PropertyChangedEventHandler;
                this.changed -= propertyChangedEventHandler;
            }
            else
            {
                result = info.TargetMethod.Invoke(this.Proxy, info.Arguments);
            }

            if (isSetter == true)
            {
                String propertyName = info.TargetMethod.Name.Substring("set_".Length);
                this.changed(this.Proxy, new PropertyChangedEventArgs(propertyName));
            }

            return (result);
        }

        #endregion
    }

    What this does for every interceptable method (those that are either virtual or come from INotifyPropertyChanged) is:

    For the methods that came from the INotifyPropertyChanged interface, add_PropertyChanged and remove_PropertyChanged (yes, events are methods), it adds an implementation that adds or removes the event handlers to the delegate we declared as changed;
    For all the others, it directs them to the place where they are actually implemented, which is the Proxy field;
    If the call is setting a property, it fires the PropertyChanged event afterwards.

    In order to use this, we need to add the interceptor to the Configuration before building the ISessionFactory:

    using (ISessionFactory factory = cfg.SetInterceptor(new NotifyPropertyChangedInterceptor()).BuildSessionFactory())
    {
        using (ISession session = factory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            Customer customer = session.Get<Customer>(100); //some id
            INotifyPropertyChanged inpc = customer as INotifyPropertyChanged;
            inpc.PropertyChanged += delegate(Object sender, PropertyChangedEventArgs e)
            {
                //fired when a property changes
            };
            customer.Address = "some other address"; //will raise PropertyChanged
            customer.RecentOrders.ToList(); //will trigger the lazy loading
        }
    }

    Any problems or questions, do drop me a line!


  • IRM Item Codes – what are they for?

    - by martin.abrahams
    A number of colleagues have been asking about IRM item codes recently: what are they for, when are they useful, how can you control them to meet customer requirements? This is quite a big topic, but this article provides a few answers.

    An item code is part of the metadata of every sealed document, unless you define a custom metadata model. The item code is defined when a file is sealed, and usually defaults to a timestamp/filename combination. This time/name combo tends to make item codes unique for each new document, but actually item codes are not necessarily unique, as will become clear shortly. In most scenarios, item codes are not relevant to the evaluation of a user's rights; the context name is the critical piece of metadata, as a user typically has a role that grants access to an entire classification of information regardless of item code. This is key to the simplicity and manageability of the Oracle IRM solution. Item codes are occasionally exposed to users in the UI, but most users probably never notice and never care. Nevertheless, here is one example of where you can see an item code: when you hover the mouse pointer over a sealed file. As you see, the item code for this freshly created file combines a timestamp with the file name.

    But what are item codes for? The first benefit of item codes is that they enable you to manage exceptions to the policy defined for a context. Thus, I might have access to all "oracle - internal" files except for 2011_03_11 13:33:29 Board Minutes.sdocx. This simple mechanism enables Oracle IRM to provide file-by-file control where appropriate, whilst offering the scalability and manageability of classification-based control for the majority of users and content. You really don't want to be managing each file individually, but never say never. Item codes can also be used for the opposite effect: to include a file in a user's rights when their role would ordinarily deny access. So, you can assign a role that allows access only to specified item codes. For example, my role might say that I have access to precisely one file, the one shown above.

    So how are item codes set? In the vast majority of scenarios, item codes are set automatically as part of the sealing process. The sealing API uses the timestamp and filename as shown, and the user need not even realise that this has happened. This automatically creates item codes that are for all practical purposes unique, and that are also intelligible to users who might want to refer to them when viewing or assigning rights in the management UI. It is also possible for suitably authorised users and applications to set the item code manually or programmatically if required.

    Setting the item code manually using the IRM Desktop

    The manual process is a simple extension of the sealing task. An authorised user can select the Advanced... sealing option, and will see a dialog that offers the option to specify the item code. To see this option, the user's role needs the Set Item Code right; you don't want most users to give any thought at all to item codes, so by default the option is hidden.

    Setting the item code programmatically

    A more common scenario is that an application controls the item code programmatically. For example, a document management system that seals documents as part of a workflow might set the item code to match the document's unique identifier in its repository. This offers the option to tie IRM rights evaluation directly to the security model defined in the document management system. Again, the sealing application needs to be authorised to Set Item Code.

    The Payslip Scenario

    To give a concrete example of how item codes might be used in a real world scenario, consider a Human Resources workflow such as payslips. The goal might be to allow the HR team to have access to all payslips, but each employee to have access only to their own payslips. To enable this, you might have an IRM classification called Payslips. The HR team have a role in the normal way that allows access to all payslips. However, each employee would have an Item Reader role that only allows them to access files that have a particular item code, and that item code might match the employee's payroll number. So, employee number 123123123 would have access to items with that code. This shows why item codes are not necessarily unique: you can deliberately set the same code on many files for ease of administration. The employees might have the right to unseal or print their payslip, so the solution acts as a secure delivery mechanism that allows payslips to be distributed via corporate email without any fear that they might be accessed by IT administrators, or forwarded accidentally to anyone other than the intended recipient. All that remains is to ensure that as each user's payslip is sealed, it is assigned the correct item code, something that is easily managed by a simple IRM sealing application. Each month, an employee's payslip is sealed with the same item code, so you do not need to keep amending the list of items that the user has access to: they have access to all documents that carry their employee code.


  • What is the point of the prototype method?

    - by Mild Fuzz
    I am reading through JavaScript: The Good Parts, and struggled to get my head around the section on prototypes. After a little googling, I came to the conclusion that prototypes are for adding properties to objects after the object's declaration. Using this script gleaned from w3schools, I noticed that removing the line that adds the prototype property had no effect. So what is the point?

    //Prototyping
    function employee(name, jobtitle, born)
    {
        this.name = name;
        this.jobtitle = jobtitle;
        this.born = born;
    }

    var fred = new employee("Fred Flintstone", "Caveman", 1970);
    employee.prototype.salary = null; // <--- try removing this line
    fred.salary = 20000;
    document.write(fred.salary);
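    A sketch (not from the original question) of why removing the line changes nothing here: the assignment fred.salary = 20000 creates an own property that shadows the prototype one, so the prototype default is only visible while no own property exists:

    function employee(name) { this.name = name; }
    employee.prototype.salary = null;

    var fred = new employee("Fred");
    console.log(fred.salary); // null, read from the prototype
    fred.salary = 20000;      // own property now shadows the prototype
    console.log(fred.salary); // 20000
    delete fred.salary;       // remove the own property again
    console.log(fred.salary); // null, the prototype shows through
                              // (undefined if the prototype line is removed)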


  • 2D game big background images for maps

    - by WhiteCat
    Update: this question is general, not specific to Sprite Kit or a single language/platform. I'm toying with Sprite Kit with an idea to make a 2D side-scroller. The backgrounds for the maps are going to be hand-drawn and surely bigger than a retina display, so the maps could span more than one screen along both axes. I imagine loading such a huge image could mean trouble, and I don't plan to use tiling. I'm not sure how Sprite Kit splits images bigger than the maximum texture size, if it does. I could split the images myself and use more sprites for each part of the background (see the sketch below). What is the usual way to handle this?
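    A sketch of the manual split mentioned above, as a build-time step rather than anything Sprite Kit specific (the file names and the 2048px limit are assumptions; check the target hardware's maximum texture size):

    using System;
    using System.Drawing;
    using System.Drawing.Imaging;

    class SplitBackground
    {
        static void Main()
        {
            const int chunk = 2048; // stay under the GPU's max texture size
            using (var source = new Bitmap("background.png"))
            {
                for (int y = 0; y < source.Height; y += chunk)
                    for (int x = 0; x < source.Width; x += chunk)
                    {
                        int w = Math.Min(chunk, source.Width - x);
                        int h = Math.Min(chunk, source.Height - y);
                        var rect = new Rectangle(x, y, w, h);
                        using (Bitmap piece = source.Clone(rect, source.PixelFormat))
                        {
                            // each piece becomes one sprite, positioned at (x, y)
                            piece.Save(string.Format("bg_{0}_{1}.png", x / chunk, y / chunk), ImageFormat.Png);
                        }
                    }
            }
        }
    }

    At run time, each piece is loaded as its own texture and the sprites are laid out edge to edge at the offsets encoded in the file names.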


  • Repositories and the Save Method

    One of the questions I've been getting lately goes like this: should a Repository class have a Save method? And the standard answer: it depends. It's hard to keep track of all the different approaches to implementing the repository pattern these days, but I assume when someone asks me this type of question they are thinking of using code like this:

    var employee = new Employee();
    employeeRepository.Save(employee);

    var account = new Account();
    accountRepository.Save(account);

    This...
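    To make the trade-off concrete, a sketch (the names are hypothetical, not from the article) of the two shapes the question usually contrasts: a repository that persists immediately versus one that defers to a unit of work:

    // Variant 1: the repository itself commits each change.
    public interface IEmployeeRepository
    {
        void Save(Employee employee); // insert-or-update, hits the database now
    }

    // Variant 2: repositories only collect changes; a unit of work commits them.
    public interface IRepository<T>
    {
        void Add(T entity);
        void Remove(T entity);
    }

    public interface IUnitOfWork
    {
        void Commit(); // flushes every pending change in one transaction
    }

    The second shape moves the "when do we hit the database" decision out of the repositories, which is usually the crux of the "it depends" answer.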


  • Debugging apache seg fault with gdb

    - by Joyce Babu
    Apache on a production server of mine is seg faulting intermittently. I have enabled the core dump option in the Apache configuration and have several dumped core files. Unfortunately, since it is a production server, neither Apache nor the loaded modules are compiled with debug symbols. From what I understand, gdb cannot do much without debug symbols. Can I at least find out which module is causing the seg fault, without debug symbols? If so, how? Following is the output from a gdb backtrace:

    (gdb) bt full
    #0  0xb7f1f832 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
    No symbol table info available.
    #1  0xb7be82bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/libpthread.so.0
    No symbol table info available.
    #2  0xb771652a in ?? () from /usr/local/apache/modules/mod_pagespeed.so
    No symbol table info available.
    #3  0xb75df576 in ?? () from /usr/local/apache/modules/mod_pagespeed.so
    No symbol table info available.
    #4  0xb7715c20 in ?? () from /usr/local/apache/modules/mod_pagespeed.so
    No symbol table info available.
    #5  0xb7be4a49 in start_thread () from /lib/libpthread.so.0
    No symbol table info available.
    #6  0xb7b2a63e in clone () from /lib/libc.so.6
    No symbol table info available.

    Does this mean that /lib/ld-linux.so.2 is causing the seg fault?
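    Even without symbols, gdb already names the module here: frames #2 through #4 fall inside /usr/local/apache/modules/mod_pagespeed.so, while frame #0 is just the syscall entry point in the dynamic loader. To map a raw crash address onto a module by hand, the loaded address ranges can be listed from the core:

    (gdb) info sharedlibrary

    That prints the From/To address range of every loaded .so alongside whether symbols were read; an instruction pointer landing between From and To for a given library places the fault in that library. Installing the distribution's debug-symbol package for the suspect module (or a non-stripped build of it) then turns the ?? frames into function names.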


  • Tracking a subdomain separately within the main domain account [closed]

    - by Vinay
    I have a website, for example xyz.com, and a subdomain, info.xyz.com. I created a profile for xyz.com and tracking is good. I then added a new profile for info.xyz.com under the xyz.com account. Analytics tracking for info.xyz.com is showing traffic from both xyz.com and info.xyz.com. How do I change this so that only info.xyz.com traffic shows in the info.xyz.com profile? I used the following code:

    Analytics code for the xyz.com domain:

    <script type="text/javascript">
      var _gaq = _gaq || [];
      _gaq.push(['_setAccount', 'UA-xxxxxx-x']);
      _gaq.push(['_trackPageview']);

      (function() {
        var ga = document.createElement('script');
        ga.type = 'text/javascript';
        ga.async = true;
        ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
        var s = document.getElementsByTagName('script')[0];
        s.parentNode.insertBefore(ga, s);
      })();
    </script>

    Analytics code for info.xyz.com:

    <script type="text/javascript">
      var _gaq = _gaq || [];
      _gaq.push(['_setAccount', 'UA-xxxxxx-x']);
      _gaq.push(['_setDomainName', 'xyz.com']);
      _gaq.push(['_trackPageview']);

      (function() {
        var ga = document.createElement('script');
        ga.type = 'text/javascript';
        ga.async = true;
        ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
        var s = document.getElementsByTagName('script')[0];
        s.parentNode.insertBefore(ga, s);
      })();
    </script>


  • how to rewrite '%25' in url

    - by nn4l
    My website software replaces space characters with '+' characters in the URL. A proper link looks like 'http://www.schirmacher.de/display/INFO/How+to+reattach+a+disk+to+XenServer', for example. Some websites link to that article, but somehow their embedded editor can't handle the encoding, so what I see in the httpd log files is actually GET /display/INFO/How%2525252bto%2525252breattach%2525252ba%2525252bdisk%2525252bto%2525252bXenServer which of course leads to a 404 error. It seems that the '+' character was encoded as '%2b' and then the '%' character was encoded as '%25', several times over. Since there are many such references to different pages from different websites, I would like to rewrite the URL so that visitors get the correct page. Here's my attempt, which does not work:

    RewriteRule ^(.*)%25(.*)$ $1%$2 [R=301]

    What it is supposed to do is: take everything before the %25 string and everything after it, concatenate those strings with a '%' in between, then redirect. With the example input URL the rule should rewrite to /display/INFO/How%25252bto%2525252breattach%2525252ba%2525252bdisk%2525252bto%2525252bXenServer followed by a redirect, then it should rewrite to /display/INFO/How%252bto%2525252breattach%2525252ba%2525252bdisk%2525252bto%2525252bXenServer and again to /display/INFO/How%2bto%2525252breattach%2525252ba%2525252bdisk%2525252bto%2525252bXenServer and so on. Finally, after a lot of redirects, I should be left with /display/INFO/How%2bto%2breattach%2ba%2bdisk%2bto%2bXenServer which is a valid URL equivalent to /display/INFO/How+to+reattach+a+disk+to+XenServer. My problem is that the expression does not match at all, so it does not even replace a single occurrence of %25. I understand that there is a limit on the number of redirects and I should really use the [N] flag; however, I don't even get the first step right.


  • GAE Datastore Put()

    - by Ivan Slaughter
    def post(self):
        update = self.request.get('update')
        if users.get_current_user():
            if update:
                personal = db.GqlQuery("SELECT * FROM Personal WHERE __key__ = :1", db.Key(update))
                personal.name = self.request.get('name')
                personal.gender = self.request.get('gender')
                personal.mobile_num = self.request.get('mobile_num')
                personal.birthdate = int(self.request.get('birthdate'))
                personal.birthplace = self.request.get('birthplace')
                personal.address = self.request.get('address')
                personal.geo_pos = self.request.get('geo_pos')
                personal.info = self.request.get('info')
                photo = images.resize(self.request.get('img'), 0, 80)
                personal.photo = db.Blob(photo)
                personal.put()
                self.redirect('/admin/personal')
            else:
                personal = Personal()
                personal.name = self.request.get('name')
                personal.gender = self.request.get('gender')
                personal.mobile_num = self.request.get('mobile_num')
                personal.birthdate = int(self.request.get('birthdate'))
                personal.birthplace = self.request.get('birthplace')
                personal.address = self.request.get('address')
                personal.geo_pos = self.request.get('geo_pos')
                personal.info = self.request.get('info')
                photo = images.resize(self.request.get('img'), 0, 80)
                personal.photo = db.Blob(photo)
                personal.put()
                self.redirect('/admin/personal')
        else:
            self.response.out.write('I\'m sorry, you don\'t have permission to add this LP Personal Data.')

    Shouldn't this update the existing record if 'update' is a querystring parameter containing the datastore key? I tried this, but it keeps adding a new record/entity. Please give me some suggestions for correctly updating the record/entity. Correction?:

    def post(self):
        update = self.request.get('update')
        if users.get_current_user():
            if update:
                personal = Personal.get(db.Key(update))
                personal.name = self.request.get('name')
                personal.gender = self.request.get('gender')
                personal.mobile_num = self.request.get('mobile_num')
                personal.birthdate = int(self.request.get('birthdate'))
                personal.birthplace = self.request.get('birthplace')
                personal.address = self.request.get('address')
                personal.geo_pos = self.request.get('geo_pos')
                personal.info = self.request.get('info')
                photo = images.resize(self.request.get('img'), 0, 80)
                personal.photo = db.Blob(photo)
                personal.put()
                self.redirect('/admin/personal')
            else:
                personal = Personal()
                personal.name = self.request.get('name')
                personal.gender = self.request.get('gender')
                personal.mobile_num = self.request.get('mobile_num')
                personal.birthdate = int(self.request.get('birthdate'))
                personal.birthplace = self.request.get('birthplace')
                personal.address = self.request.get('address')
                personal.geo_pos = self.request.get('geo_pos')
                personal.info = self.request.get('info')
                photo = images.resize(self.request.get('img'), 0, 80)
                personal.photo = db.Blob(photo)
                personal.put()
                self.redirect('/admin/personal')
        else:
            self.response.out.write('I\'m sorry, you don\'t have permission to add this LP Personal Data.')


  • Maven jetty download dependencies

    - by portoalet
    Why does Maven try to download some dependencies (the Apache POI and ojdbc jars) every time I do "mvn jetty:run"? How can I disable this?

    [INFO] Scanning for projects..
    [INFO] Searching repository for plugin with prefix: 'jetty'.
    [INFO] ------------------------------------------------------------------------
    [INFO] Building infolitReport
    [INFO]    task-segment: [jetty:run]
    [INFO] ------------------------------------------------------------------------
    [INFO] Preparing jetty:run
    Downloading: http://repository.springsource.com/maven/bundles/release/org/apache/poi/com.springsource.org.apache.poi/3.6/com.springsource.org.apache.poi-3.6.pom
    Downloading: http://repository.springsource.com/maven/bundles/external/org/apache/poi/com.springsource.org.apache.poi/3.6/com.springsource.org.apache.poi-3.6.pom
    Downloading: http://repository.springsource.com/maven/bundles/milestone/org/apache/poi/com.springsource.org.apache.poi/3.6/com.springsource.org.apache.poi-3.6.pom
    Downloading: http://repository.springsource.com/maven/bundles/snapshot/org/apache/poi/com.springsource.org.apache.poi/3.6/com.springsource.org.apache.poi-3.6.pom
    Downloading: http://repo1.maven.org/maven2/org/apache/poi/com.springsource.org.apache.poi/3.6/com.springsource.org.apache.poi-3.6.pom
    Downloading: http://repository.springsource.com/maven/bundles/release/com/oracle/ojdbc14/10.2.0.2/ojdbc14-10.2.0.2.pom
    Downloading: http://repository.springsource.com/maven/bundles/external/com/oracle/ojdbc14/10.2.0.2/ojdbc14-10.2.0.2.pom
    Downloading: http://repository.springsource.com/maven/bundles/milestone/com/oracle/ojdbc14/10.2.0.2/ojdbc14-10.2.0.2.pom
    Downloading: http://repository.springsource.com/maven/bundles/snapshot/com/oracle/ojdbc14/10.2.0.2/ojdbc14-10.2.0.2.pom
    Downloading: http://repo1.maven.org/maven2/com/oracle/ojdbc14/10.2.0.2/ojdbc14-10.2.0.2.pom
    [INFO] [aspectj:compile {execution: default}]
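    A hedged observation: Maven re-checks remote repositories when an artifact's POM could not be resolved on a previous run (the downloads above are all for .pom files). If the jars themselves are already in the local repository, running in offline mode stops the re-checking:

    mvn -o jetty:run

    The same effect is available permanently via <offline>true</offline> in settings.xml, or by setting the repositories' updatePolicy to never; the cleaner long-term fix is usually to make the missing POMs resolvable so Maven can cache the result.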


  • loading content from php file with jQuery

    - by Billa
    The following code fetches all data (by clicking the a.info link) from a PHP file, info.php, and prints it in the #content div. The problem is that it prints everything from the info.php file. Can I select only some part of the data from info.php to load into #content? The reason for this question is that I want to load different data from the same PHP file for different links.

    $("a.info").click(function() {
        var id = $(this).attr("id");
        $("#box").slideDown("slow");
        $.ajax({
            type: "POST",
            data: "id=" + $(this).attr("id"),
            url: "info.php",
            success: function(html) {
                $("#content").html(html);
            }
        });
    });

    HTML where the content is loading:

    <div id="box">
        <div id="content"></div>
    </div>

    info.php contains: paragraph1. paragraph2.

    For example, in the above info.php file, I only want to load paragraph1 into #content. I hope my question is clear. Any help will be appreciated.
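    One approach (a sketch; it assumes info.php wraps each part in an element with an id such as paragraph1): jQuery's .load() accepts a selector after the URL and keeps only the matching fragment of the response. Alternatively, keep the $.ajax call and let info.php branch on the posted id so it only echoes the part that was asked for:

    // fetch info.php but keep only the element with id="paragraph1"
    $("#content").load("info.php #paragraph1");

    // or: info.php inspects $_POST['id'] and prints just that section,
    // so the existing success handler can stay exactly as it is
    $.ajax({
        type: "POST",
        url: "info.php",
        data: "id=" + id,
        success: function(html) {
            $("#content").html(html);
        }
    });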


  • HtmlAgilityPack getting <P> and <STRONG> text

    - by StealthRT
    Hey all, I am looking for a way to parse this HTML code:

    <DIV class=channel_row><SPAN class=channel>
    <DIV class=logo><IMG src='/images/channel_logos/WGNAMER.png'></DIV>
    <P><STRONG>2</STRONG><BR>WGNAMER </P></SPAN>

    using the HtmlAgilityPack. I have been trying this:

    With channel
        info!Logo = .SelectSingleNode(".//img").Attributes("src").Value
        info!Channel = .SelectSingleNode(".//span[@class='channel']").ChildNodes(1).ChildNodes(0).InnerText
        info!Station = .SelectSingleNode(".//span[@class='channel']").ChildNodes(1).ChildNodes(2).InnerText
    End With

    I can get the Logo, but it comes up with a blank string for the Channel, and for the Station it says "Index was out of range. Must be non-negative and less than the size of the collection." I've tried all kinds of combinations:

    info!Station = .SelectSingleNode(".//span[@class='channel']").ChildNodes(1).ChildNodes(1).InnerText
    info!Station = .SelectSingleNode(".//span[@class='channel']").ChildNodes(1).ChildNodes(3).InnerText
    info!Station = .SelectSingleNode(".//span[@class='channel']").ChildNodes(0).ChildNodes(1).InnerText
    info!Station = .SelectSingleNode(".//span[@class='channel']").ChildNodes(0).ChildNodes(2).InnerText
    info!Station = .SelectSingleNode(".//span[@class='channel']").ChildNodes(0).ChildNodes(3).InnerText

    What do I need to do in order to correct this?
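    A hedged alternative (shown in C# for brevity; doc is assumed to be an HtmlDocument already loaded with the markup above): ChildNodes indexes also count whitespace text nodes, which is what makes positional indexing fragile here, so addressing the <strong> and <p> elements by XPath sidesteps the counting entirely:

    // channel number sits in <p><strong>2</strong><br>WGNAMER</p>
    HtmlNode span = doc.DocumentNode.SelectSingleNode("//span[@class='channel']");
    string logo = span.SelectSingleNode(".//img").Attributes["src"].Value;
    string channel = span.SelectSingleNode(".//p/strong").InnerText; // "2"

    // station name: the <p> text with the channel number stripped off
    // (crude for a sketch; fine while the station name contains no digits)
    string station = span.SelectSingleNode(".//p").InnerText
                         .Replace(channel, "").Trim();               // "WGNAMER"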


  • Starting CLI application programmatically does not work depending on arguments

    - by Daniel Beck
    I am trying to start plink.exe (PuTTY Link, the command-line utility/version of PuTTY) from a C# application to establish an SSH reverse tunnel, but it no longer works as soon as I pass the correct parameters. What does that mean? The following works as expected: it opens a command-line window, displays that I forgot to pass the password for the -pw argument, quits, and shows the prompt. I know it got the arguments, since it specifically requests the one thing I did not provide.

    Uri uri = omitted;
    ProcessStartInfo info = new ProcessStartInfo();
    info.FileName = "cmd";
    info.Arguments = "/k \"C:\\Program Files (x86)\\PuTTY\\plink.exe\" -R 3389:" + uri.Host + ":" + uri.Port + " -N -l username -pw"; // TODO pwd
    Process p = Process.Start(info);

    I tried the same thing calling plink.exe directly instead of cmd.exe /k, but the window closes immediately, which is unfortunate for bug-hunting. BUT when I pass a password in the arguments, plink.exe displays the program help (showing the available parameters) and quits:

    Uri uri = omitted;
    ProcessStartInfo info = new ProcessStartInfo();
    info.FileName = "cmd";
    info.Arguments = "/k \"C:\\Program Files (x86)\\PuTTY\\plink.exe\" -R 3389:" + uri.Host + ":" + uri.Port + " -N -l username -pw secretpassword";
    Process p = Process.Start(info);

    No indication of missing parameters. Both the cmd /k and plink.exe variants do not work (the latter closes immediately, so no information regarding different behaviour). When I launch the application from the Windows 7 Start Menu prompt with identical parameters, it opens a cmd.exe window and establishes the connection as requested. What's wrong? Is there a way plink.exe notices it's not running in a real shell? If yes, how can I circumvent it, like the Start Menu "prompt" does? I hope this question is on-topic for SO, since, though specific to a single application, it revolves around launching it successfully programmatically.
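    A hedged debugging aid (not a fix): launching plink.exe directly with its output redirected keeps the usage text around even though the window never appears, which should reveal how plink actually parsed the argument string:

    ProcessStartInfo info = new ProcessStartInfo();
    info.FileName = @"C:\Program Files (x86)\PuTTY\plink.exe";
    info.Arguments = "-R 3389:" + uri.Host + ":" + uri.Port + " -N -l username -pw secretpassword";
    info.UseShellExecute = false;        // required for redirection
    info.CreateNoWindow = true;
    info.RedirectStandardError = true;   // plink prints usage/errors on stderr

    Process p = Process.Start(info);
    string stderr = p.StandardError.ReadToEnd(); // fine here, the output is short
    p.WaitForExit();

    Since plink shows its help screen, it saw an argument it did not expect; the usual suspect in strings built this way is quoting or an unexpected token (for instance, a Uri whose Port property does not contain the value assumed). That is an assumption worth checking, not a confirmed cause.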


  • Maven GAE Plugin - Unable to run gae:debug

    - by Taylor L
    I'm having trouble running the gae:debug goal of the Maven GAE Plugin. The error I'm receiving is below. Any ideas? I'm running it with "mvn gae:debug".

    [INFO] Packaging webapp
    [INFO] Assembling webapp[test-gae] in [C:\development\test-gae\target\test-gae-0.0.1-SNAPSHOT]
    [INFO] Processing war project
    [INFO] Webapp assembled in[56 msecs]
    [INFO] Building war: C:\development\test-gae\target\test-gae-0.0.1-SNAPSHOT.war
    [INFO] [statemgmt:end-fork]
    [INFO] Ending forked execution [fork id: -2101914270]
    [INFO] [gae:debug]
    Usage: <dev-appserver> [options] <war directory>

    Options:
     --help, -h                  Show this help message and exit.
     --server=SERVER             The server to use to determine the latest
      -s SERVER                  SDK version.
     --address=ADDRESS           The address of the interface on the local machine
      -a ADDRESS                 to bind to (or 0.0.0.0 for all interfaces).
     --port=PORT                 The port number to bind to on the local machine.
      -p PORT
     --sdk_root=root             Overrides where the SDK is located.
     --disable_update_check      Disable the check for newer SDK versions.

    EDIT: gae:run with the jvmFlags option is also giving me the same result with the below configuration.

    <plugin>
      <groupId>net.kindleit</groupId>
      <artifactId>maven-gae-plugin</artifactId>
      <version>0.5.0</version>
      <configuration>
        <jvmFlags>
          <jvmFlag>-Xdebug</jvmFlag>
          <jvmFlag>-Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8000</jvmFlag>
        </jvmFlags>
      </configuration>
    </plugin>


  • Running Hadoop example in pseudo-distributed mode on a VM

    - by manas
    I have set up Hadoop on an OpenSuse 11.2 VM using VirtualBox. I have made the prerequisite configs and ran this example successfully in standalone mode. But in pseudo-distributed mode I get the following error:

    $ ./bin/hadoop fs -put conf input
    10/04/13 15:56:25 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Protocol not available
    10/04/13 15:56:25 INFO hdfs.DFSClient: Abandoning block blk_-8490915989783733314_1003
    10/04/13 15:56:31 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Protocol not available
    10/04/13 15:56:31 INFO hdfs.DFSClient: Abandoning block blk_-1740343312313498323_1003
    10/04/13 15:56:37 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Protocol not available
    10/04/13 15:56:37 INFO hdfs.DFSClient: Abandoning block blk_-3566235190507929459_1003
    10/04/13 15:56:43 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Protocol not available
    10/04/13 15:56:43 INFO hdfs.DFSClient: Abandoning block blk_-1746222418910980888_1003
    10/04/13 15:56:49 WARN hdfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable to create new block.
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2845)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
    10/04/13 15:56:49 WARN hdfs.DFSClient: Error Recovery for block blk_-1746222418910980888_1003 bad datanode[0] nodes == null
    10/04/13 15:56:49 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/max/input/core-site.xml" - Aborting...
    put: Protocol not available
    10/04/13 15:56:49 ERROR hdfs.DFSClient: Exception closing file /user/max/input/core-site.xml : java.net.SocketException: Protocol not available
    java.net.SocketException: Protocol not available
        at sun.nio.ch.Net.getIntOption0(Native Method)
        at sun.nio.ch.Net.getIntOption(Net.java:178)
        at sun.nio.ch.SocketChannelImpl$1.getInt(SocketChannelImpl.java:419)
        at sun.nio.ch.SocketOptsImpl.getInt(SocketOptsImpl.java:60)
        at sun.nio.ch.SocketOptsImpl.sendBufferSize(SocketOptsImpl.java:156)
        at sun.nio.ch.SocketOptsImpl$IP$TCP.sendBufferSize(SocketOptsImpl.java:286)
        at sun.nio.ch.OptionAdaptor.getSendBufferSize(OptionAdaptor.java:129)
        at sun.nio.ch.SocketAdaptor.getSendBufferSize(SocketAdaptor.java:328)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.createBlockOutputStream(DFSClient.java:2873)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2826)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)

    Any leads will be highly appreciated.
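    A hedged lead, not a confirmed diagnosis: this exact SocketException (thrown from the sun.nio getSendBufferSize path) has often been tied to the JVM selecting the IPv6 stack on some distros; forcing IPv4 for the Hadoop daemons and restarting them is a cheap first test:

    # conf/hadoop-env.sh
    export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true"

    If the put then succeeds, the socket-option failure was coming from the IPv6 path rather than from the HDFS configuration itself.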


  • Read Process StandardOutput from a BackgroundWorker method

    - by Nicola Celiento
    Hi all, I'm working with a BackgroundWorker process in a Windows .NET C# application. From the async method I launch many instances of the cmd.exe process (in a foreach loop) and read from StandardOutput, but on the third cycle the ReadToEnd() method throws a timeout exception. Here is the code:

    StreamWriter sw = null;
    DataTable dt2 = null;
    StringBuilder result = null;
    Process p = null;

    foreach (DataRow dr in dt1.Rows)
    {
        arrLine = new string[4]; // new row
        result = new StringBuilder();
        arrLine[0] = dr["ID"].ToString();
        arrLine[1] = dr["ChangeSet"].ToString();

        #region Initialize Process CMD
        p = new Process();
        ProcessStartInfo info = new ProcessStartInfo();
        info.FileName = "cmd.exe";
        info.CreateNoWindow = true;
        info.RedirectStandardInput = true;
        info.RedirectStandardOutput = true;
        info.RedirectStandardError = true;
        info.UseShellExecute = false;
        p.StartInfo = info;
        p.Start();
        sw = new StreamWriter(p.StandardInput.BaseStream);
        #endregion

        using (sw)
        {
            if (sw.BaseStream.CanWrite)
            {
                sw.WriteLine("cd " + this.path_WORKDIR);
                sw.WriteLine("tfpt getcs /changeset:" + arrLine[1]);
            }
        }

        string strError = p.StandardError.ReadToEnd(); // read shell error output (TIMEOUT)
        result.Append(p.StandardOutput.ReadToEnd());   // read shell output

        sw.Close();
        sw.Dispose();
        p.Close();
        p.Dispose();
    }

    The timeout is on the line: string strError = p.StandardError.ReadToEnd(); Can you help me? Thanks!
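    The symptom matches the classic redirected-stream deadlock rather than anything specific to the third iteration: with both streams redirected, a blocking ReadToEnd() on StandardError waits while cmd.exe blocks writing to a full StandardOutput pipe that nobody is draining. A hedged restructuring of the loop body, draining stderr asynchronously so stdout can safely be read to the end:

    StringBuilder errors = new StringBuilder();
    p.ErrorDataReceived += (sender, e) =>
    {
        if (e.Data != null)
            errors.AppendLine(e.Data);
    };

    p.Start();
    p.BeginErrorReadLine(); // drain stderr in the background from now on

    using (StreamWriter sw2 = new StreamWriter(p.StandardInput.BaseStream))
    {
        sw2.WriteLine("cd " + this.path_WORKDIR);
        sw2.WriteLine("tfpt getcs /changeset:" + arrLine[1]);
        sw2.WriteLine("exit"); // let cmd.exe terminate so the streams close
    }

    result.Append(p.StandardOutput.ReadToEnd()); // safe: stderr can no longer fill up
    p.WaitForExit();
    string strError = errors.ToString();

    The 'exit' line is an assumption about the intent (the original relies on closing stdin to end cmd.exe); the essential change is that only one stream is read synchronously.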

