Search Results

Search found 2046 results on 82 pages for 'agent ransack'.


  • Can't connect to certain HTTPS sites

    - by mind.blank
    I've just moved to a new apartment and with internet connection via a router and I'm finding that I can't connect to quite a few sites that use SSL. For example trying to connect to PayPal: curl -v https://paypal.com * About to connect() to paypal.com port 443 (#0) * Trying 66.211.169.3... connected * successfully set certificate verify locations: * CAfile: none CApath: /etc/ssl/certs * SSLv3, TLS handshake, Client hello (1): * Unknown SSL protocol error in connection to paypal.com:443 * Closing connection #0 curl: (35) Unknown SSL protocol error in connection to paypal.com:443 curl -v -ssl https://paypal.com gives the same output. For some sites it works: curl -v https://www.google.com * About to connect() to www.google.com port 443 (#0) * Trying 74.125.235.112... connected * successfully set certificate verify locations: * CAfile: none CApath: /etc/ssl/certs * SSLv3, TLS handshake, Client hello (1): * SSLv3, TLS handshake, Server hello (2): * SSLv3, TLS handshake, CERT (11): * SSLv3, TLS handshake, Server key exchange (12): * SSLv3, TLS handshake, Server finished (14): * SSLv3, TLS handshake, Client key exchange (16): * SSLv3, TLS change cipher, Client hello (1): * SSLv3, TLS handshake, Finished (20): * SSLv3, TLS change cipher, Client hello (1): * SSLv3, TLS handshake, Finished (20): * SSL connection using ECDHE-RSA-RC4-SHA * Server certificate: * subject: C=US; ST=California; L=Mountain View; O=Google Inc; CN=www.google.com * start date: 2011-10-26 00:00:00 GMT * expire date: 2013-09-30 23:59:59 GMT * common name: www.google.com (matched) * issuer: C=ZA; O=Thawte Consulting (Pty) Ltd.; CN=Thawte SGC CA * SSL certificate verify ok. > GET / HTTP/1.1 > User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3 > Host: www.google.com > Accept: */* > < HTTP/1.1 302 Found < Location: https://www.google.co.jp/ . . . I'm using Ubuntu 12.04, with Windows 7 installed as well. 
These sites work on Windows :( Not sure if this information helps but I ran ifconfig and got the following: eth0 Link encap:Ethernet HWaddr 1c:c1:de:bc:e2:4f inet6 addr: 2408:c3:7fff:991:686b:8d18:81b3:8dd1/64 Scope:Global inet6 addr: 2408:c3:7fff:991:1ec1:deff:febc:e24f/64 Scope:Global inet6 addr: fe80::1ec1:deff:febc:e24f/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:87075 errors:0 dropped:0 overruns:0 frame:0 TX packets:54522 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:78167937 (78.1 MB) TX bytes:10016891 (10.0 MB) Interrupt:46 Base address:0x4000 eth1 Link encap:Ethernet HWaddr ac:81:12:0d:93:80 inet6 addr: fe80::ae81:12ff:fe0d:9380/64 Scope:Link UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:498 TX packets:0 errors:26 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:17 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:630 errors:0 dropped:0 overruns:0 frame:0 TX packets:630 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:39592 (39.5 KB) TX bytes:39592 (39.5 KB) ppp0 Link encap:Point-to-Point Protocol inet addr:180.57.228.200 P-t-P:118.23.8.175 Mask:255.255.255.255 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1492 Metric:1 RX packets:39631 errors:0 dropped:0 overruns:0 frame:0 TX packets:22391 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:3 RX bytes:43462054 (43.4 MB) TX bytes:2834628 (2.8 MB)
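    A few commands can help narrow down where the handshake is failing. The sketch below is not from the original post; it assumes a Linux shell with curl and openssl installed, and the 1464-byte ping size is derived from the ppp0 MTU of 1492 shown in the ifconfig output above (1492 minus 28 bytes of IP/ICMP headers). Forcing specific protocol versions shows whether this is a negotiation problem, while the ping test checks for an MTU/fragmentation issue on the PPPoE link, a common reason handshakes stall only against certain hosts.

        # Force specific SSL/TLS versions to see whether any of them completes
        curl -v --tlsv1 https://paypal.com
        curl -v --sslv3 https://paypal.com

        # Watch the raw handshake; if it hangs partway through, suspect the path MTU
        openssl s_client -connect www.paypal.com:443

        # ppp0 reports MTU 1492, so the largest unfragmented ICMP payload is 1492 - 28 = 1464
        ping -c 3 -M do -s 1464 www.paypal.com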

    Read the article

  • Perm SSIS Developer Urgently Required

    - by blakmk
      Job Role To provide dedicated data services support to the company, by designing, creating, maintaining and enhancing database objects, ensuring data quality, consistency and integrity. Migrating data from various sources to central SQL 2008 data warehouse will be the primary function. Migration of data from bespoke legacy database’s to SQL 2008 data warehouse. Understand key business requirements, Liaising with various aspects of the company. Create advanced transformations of data, with focus on data cleansing, redundant data and duplication. Creating complex business rules regarding data services, migration, Integrity and support (Best Practices). Experience ·         Minimum 3 year SSIS experience, in a project or BI Development role and involvement in at least 3 full ETL project life cycles, using the following methodologies and tools o    Excellent knowledge of ETL concepts including data migration & integrity, focusing on SSIS. o    Extensive experience with SQL 2005 products, SQL 2008 desirable. o    Working knowledge of SSRS and its integration with other BI products. o    Extensive knowledge of T-SQL, stored procedures, triggers (Table/Database), views, functions in particular coding and querying. o    Data cleansing and harmonisation. o    Understanding and knowledge of indexes, statistics and table structure. o    SQL Agent – Scheduling jobs, optimisation, multiple jobs, DTS. o    Troubleshoot, diagnose and tune database and physical server performance. o    Knowledge and understanding of locking, blocks, table and index design and SQL configuration. ·         Demonstrable ability to understand and analyse business processes. ·         Experience in creating business rules on best practices for data services. ·         Experience in working with, supporting and troubleshooting MS SQL servers running enterprise applications ·         Proven ability to work well within a team and liaise with other technical support staff such as networking administrators, system administrators and support engineers. ·         Ability to create formal documentation, work procedures, and service level agreements. ·         Ability to communicate technical issues at all levels including to a non technical audience. ·         Good working knowledge of MS Word, Excel, PowerPoint, Visio and Project.   Location Based in Crawley with possibility of some remote working Contact me for more info: http://sqlblogcasts.com/blogs/blakmk/contact.aspx      

    Read the article

  • Something about Property Management or &hellip; the understanding of SharePoint Admins/roles ?!?

    - by Enrique Lima
    When I talk about SharePoint, for some reason it comes to my mind as if it were property management and all the tasks associated with it. So, imagine you have a lot ( a piece of land of sorts), you then decide there is something you want to do with it.  So, you make the choice of having a building built.  Now, in order to go forward with your plan, you need to check what the rules/regulations are.  Has is it been zoned residential, commercial, industrial … you get the idea.  This to me sounds like Governance.  The what am I to do given a defined set of rules. We keep on moving forward based on those rules.  And with this we start the process of building, the building process takes us to survey the land, identify what our boundaries are.  And as we go along we start getting the idea in our head as to what we will do as far as the building goes.  We identify the essentials of the building, basic services and such.  All in all, we plan.  And as with many things we do, we like solid foundations.  What a solid foundation looks like will depend on where and what we build.  The way buildings are built depends in many ways in being able to foresee the potential for natural disasters or to try to leverage the lay of the land.  Sound familiar?  We have done our Requirements Gathering. We have the building in place, we have followed the zoning rules, we have implemented services.  But we need someone to manage the building, now we move on to the human side of the story.  We want to establish a means to normalcy in the building, someone that can be the monitoring agent as to the “what’s going on?” of it.  This person will be tasked with making sure all basic services are functional, that measures are taken if there is an issue and so on.  Enter the Farm Administrator. In a way, we establish an extension of the rules to make sure the building and the apartments/offices build follow a standard set of rules too. Now, in turn you will have people leasing or buying the apartments/offices, they will be the keepers of that space.  So, now we are building sites, we have moved from having the building (farm) ready, to leasing/selling offices/apartments (site collections).  There will be someone assuming responsibility for those offices, that person will authorize or be informed about activities and also who not only gets a code into the building, but perhaps a key to the office.  Enter Site Collection Administrator.  And then perhaps we move on to the person that would be responsible for specifics within the office, for example a Human Resources Manager or Coordinator.  They will have specific control and knowledge about people.  A facilities coordinator, and so on.  I would translate that into Site Administrators. With that said then, we identify the following: Role Name Responsibility (but not limited to) Farm Administrator Infrastructure Site Collection Admin Policies for Content, Hierarchy, Recycle Bin, Security and Access Site Owner (Site Admin) Security and Access, Training, Guidance, Manage Templates All in all there are different levels of responsibility to be handled, but it is very important to understand what they are and what they mean. Here is a link to very well laid out explanation on this … http://www.endusersharepoint.com/2009/08/11/site-managers-and-end-user-expectations-roles-and-responsibilities/

    Read the article

  • Good DBAs Do Baselines

    - by Louis Davidson
    One morning, you wake up and feel funny. You can’t quite put your finger on it, but something isn’t quite right. What now? Unless you happen to be a hypochondriac, you likely drag yourself out of bed, get on with the day and gather more “evidence”. You check your symptoms over the next few days; do you feel the same, better, worse? If better, then great, it was some temporal issue, perhaps caused by an allergic reaction to some suspiciously spicy chicken. If the same or worse, then you go to the doctor for some health advice, but armed with some data to share, and having ruled out certain possible causes that are fixed with a bit of rest and perhaps an antacid. Whether you realize it or not, in comparing how you feel one day to the next, you have taken baseline measurements. In much the same way, a DBA uses baselines to gauge the health of their database servers. Of course, while SQL Server is very willing to share data regarding its health and activities, it has almost no idea of the difference between good and bad. Over time, experienced DBAs develop “mental” baselines with which they can gauge the health of their servers almost as easily as their own body. They accumulate knowledge of the daily, natural state of each part of their database system, and so know instinctively when one of their databases “feels funny”. Equally, they know when an “issue” is just a passing tremor. They see their SQL Server with all of its four CPU cores running close to 100% and don’t panic anymore. Why? It’s 5PM and every day the same thing occurs when the end-of-day reports, which are very CPU intensive, are running. Equally, they know when they need to respond in earnest when it is the first time they have heard about an issue, even if it has been happening every day. Nevertheless, no DBA can retain mental baselines for every characteristic of their systems, so we need to collect physical baselines too. In my experience, surprisingly few DBAs do this very well. Part of the problem is that SQL Server provides a lot of instrumentation. If you look, you will find an almost overwhelming amount of data regarding user activity on your SQL Server instances, and use and abuse of the available CPU, I/O and memory. It seems like a huge task even to work out which data you need to collect, let alone start collecting it on a regular basis, managing its storage over time, and performing detailed comparative analysis. Without baselines, though, it is very difficult to pinpoint what ails a server just by looking at a single snapshot of the data, or to spot retrospectively what caused the problem by examining aggregated data for the server, collected over many months. It isn’t as hard as you think to get started. You’ve probably already established some troubleshooting queries of the type SELECT Value FROM SomeSystemTableOrView. Capturing a set of baseline values for such a query can be as easy as changing it as follows: INSERT into BaseLine.SomeSystemTable (value, captureTime) SELECT Value, SYSDATETIME() FROM SomeSystemTableOrView; Of course, there are monitoring tools that will collect and manage this baseline data for you, automatically, and allow you to perform comparison of metrics over different periods. However, to get yourself started and to prove to yourself (or perhaps the person who writes the checks for tools) the value of baselines, stick something similar to the above query into an agent job, running every hour or so, and you are on your way with no excuses!
Then, the next time you investigate a slow server, and see x open transactions, y users logged in, and z rows added per hour in the Orders table, compare to your baselines and see immediately what, if anything, has changed!
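    As a concrete starting point, here is a minimal T-SQL sketch of that pattern. The BaseLine schema and table are hypothetical names, and sys.dm_os_wait_stats is used only as an example source; substitute whichever troubleshooting query you already run.

        -- Minimal baseline-capture sketch; schema and table names are hypothetical
        CREATE SCHEMA BaseLine;
        GO
        CREATE TABLE BaseLine.WaitStats
        (
            wait_type     nvarchar(60) NOT NULL,
            wait_time_ms  bigint       NOT NULL,
            waiting_tasks bigint       NOT NULL,
            captureTime   datetime2    NOT NULL
        );
        GO
        -- Put this statement in an Agent job running every hour or so
        INSERT INTO BaseLine.WaitStats (wait_type, wait_time_ms, waiting_tasks, captureTime)
        SELECT wait_type, wait_time_ms, waiting_tasks_count, SYSDATETIME()
        FROM sys.dm_os_wait_stats;

    Comparing any two capture times is then a simple self-join on wait_type, which is exactly the kind of before-and-after question baselines exist to answer.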

    Read the article

  • Taking HRMS to the Cloud to Simplify Human Resources Management

    - by HCM-Oracle
    By Anke Mogannam With human capital management (HCM) a top-of-mind issue for executives in every industry, human resources (HR) organizations are poised to have their day in the sun—proving not just their administrative worth but their strategic value as well.  To make good on that promise, however, HR must modernize. Indeed, if HR is to act as an agent of change—providing the swift reallocation of employees  and the rapid absorption of employee data required for enterprises to shift course on a dime—it must first deal with the disruptive change at its own front door. And increasingly, that means choosing the right technology and human resources management system (HRMS) for managing the entire employee lifecycle. Unfortunately, for most organizations, this task has proved easier said than done. This is because while much has been written about advances in HRMS technology, until recently, most of those advances took the form of disparate on-premises solutions designed to serve very specific purposes. Although this may have resulted in key competencies in certain areas, it also meant that processes for core HR functions like payroll and benefits were being carried out in separate systems from those used for talent management, workforce optimization, training, and so on. With no integration—and no single system of record—processes were disconnected, ease of use was impeded, user experience was diminished, and vital data was left untapped.  Today, however, that scenario has begun to change, and end-to-end cloud-based HCM solutions have moved from wished-for innovations to real-life solutions. Why, then, have HR organizations been so slow in adopting them? The answer—it would seem—is, “It’s complicated.” So complicated, in fact, that 45 percent of the respondents to PwC’s “Annual HR Technology Survey” (for 2013) reported having no formal HR software roadmap, and 40 percent stated that they “did not know” whether their organizations would be increasing their use of cloud or software as a service (SaaS) for HR.  Clearly, HR organizations need help sorting through the morass of HR software options confronting them. But just as clearly, there’s an enormous opportunity awaiting those that do. The trick will come in charting a course that allows HR to leverage existing technology while investing in the cloud-based solutions that will deliver the end-to-end processes, easy-to-understand analytics, and superior adaptability required to simplify—and add value to—every aspect of employee management. The Opportunity therefore is to cut costs, drive Innovation, and increase engagement by moving to cloud-based HCM.  Then you will benefit from one Interface, leverage many access points, and  gain at-a-glance insight across your entire workforce. With many legacy on-premises HR systems not being efficient anymore and cloud-based, integrated systems that span the range of HR functions finally reaching maturity, the time is ripe for moving core HR to the cloud. Indeed, for the first time ever there are more HRMS replacement initiatives than HRMS upgrade initiatives under way, and the majority of them involve moving to the cloud per Cedar Crestone’s 2013-2014 HRMS survey. To learn how you can launch your own cloud HCM initiative and begin using HR to power the enterprise, visit Oracle HRMS in the Cloud and Oracle’s new customer 2 cloud program. 
Anke Mogannam brings more than 16 years of marketing and human capital management experience in the technology industries to her role at Oracle where she is part of the Human Capital Management applications marketing team. In that role, Anke drives content marketing, messaging, go-to-market activities, integrated marketing campaigns, and field enablement. Prior to joining Oracle, Anke held several roles in communications, marketing, HCM product strategy and product management at PeopleSoft, SAP, Workday and Saba. Follow her on Twitter @amogannam

    Read the article

  • What 5 things should SQL Server get rid of?

    - by BuckWoody
    I’ve been “tagged” by my friend Paul Randal. It’s a high-tech way of making someone else do what you want, but since it’s Paul, well, I guess I’m OK with that. He’s asked in his recent blog entry “What five things would you get rid of in SQL Server if you were in charge?” This is, of course, a delicate issue. After all, I work at Microsoft, so anything I say here might be taken as a criticism that would require action – but of course it really doesn’t. Interestingly, you may have more to do with what goes in to SQL Server than I did even as a Program Manager where I “owned” a feature. Unlike many places I’ve worked, Microsoft really does drive its products by what its users want – not every time, and not every user request, mind you, but overall I think we hit the mark pretty well. So, with all of that said, and of course the obligatory statement of “these are my own opinions, and have nothing to do with any official Microsoft position in any way, and do not reflect the opinions of other Microsoft employees or management”, here goes. 1. Get rid of SQL Server Management Studio Does that surprise you? After all, when I was a Program Manager, I actually owned the general architecture for SSMS. But those on my team probably would have been able to guess this one for you. I think that SSMS is a fine development tool. But I think that it does less of a good job for managing a system. It’s based on Visual Studio, probably one of the best development IDE’s around. And when I develop code, I really like it. But for a monitoring/management tool, I prefer a snap-in to the Microsoft Management Console (MMC). I know, the old one (prior to 3.0) was kludgy, difficult to use and program in. But that’s changed. Of course, when I bring this up, you’ll probably immediately say “But I don’t have that in XP.” And that’s one of the reasons we didn’t go there. (But I still don’t like SSMS for management.) 2. ShrinkDB I think this discussion has been done to death, so I’ll leave it at that. 3. SQL Server Agent Does that one surprise you as well? In my mind, since we ALWAYS ride on Windows, just use the task scheduler there, along with PowerShell. You could log the results in Windows logs, files, back into SQL Server, whatever. It’s just a complexity we don’t need in SQL Server. 4. SQL Server Error Logs We have a full logging setup in Windows. They’re well done, easy to understand and ubiquitous. We should just use that. 5. Several SKU’s I won’t say which, but we have a few SKU’s of SQL Server that need to go. And we need to figure out how to help you understand clearly where you need to go to Enterprise or Data Center.  Most folks are trying to push Standard edition to do things it isn’t designed to do, and then they think SQL Server won’t scale. I think we can do a better job of showing you where Standard Edition will hit the wall, and I think with fewer choices it would be pretty simple for you to pick the right one. Well, once again I’ve probably puzzled some folks and angered others. I think my work here is done. :) Back to you, Paul. Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!
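    To make point 3 concrete, here is a hedged sketch of what replacing an Agent job with the Windows task scheduler and PowerShell could look like. The script path, database, procedure name and event source are all assumptions, and Invoke-Sqlcmd requires the SQL Server PowerShell module/snap-in to be loaded.

        # nightly-maintenance.ps1 (hypothetical script): run a stored procedure and log the result
        Invoke-Sqlcmd -ServerInstance "." -Database "AdminDB" -Query "EXEC dbo.NightlyMaintenance;"
        # Assumes the 'SqlMaintenance' event source has already been registered
        Write-EventLog -LogName Application -Source "SqlMaintenance" -EventId 1000 -Message "Nightly maintenance completed."

        # Register the script with the Windows task scheduler (run once from an elevated prompt):
        # schtasks /Create /TN "SQL Nightly Maintenance" /SC DAILY /ST 01:00 /TR "powershell.exe -File C:\Scripts\nightly-maintenance.ps1"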

    Read the article

  • Perfect End to a Bad Day

    - by TehGrumpyCoder
    Yesterday's post about A Bad Day at Work actually had an addendum to it. There were apparently a bunch of guys on ice skates last night competing in some sport way the hell and gone over on the other side of the valley, and enough people couldn't live without seeing them that they had all major arteries heading west honked. I mean honked... the traffic guy reported the 101 had 16 miles of backup... yikes. Since I worked downtown for a number of years, my fallback is to cut across the city on surface streets to get to one of my old 'haunts' and just drive it home from there. Of course with the 101 backed up, then I17 would logically be as well, so I kept the news on rather than my Zune and heard where the bad stuff was going North. I popped out on the freeway about 7 miles south of my exit. Got to the exit which is about a mile from the house without killing or maiming me or anyone else. Waited patiently at the light in the inside lane to make a left and go under the freeway proceeding West. The light changed, I had full green, I started through and whoa... I've got someone in a little rat car crossing my bow! A little explanation... I drive a 3/4 ton pickup with a V-10, extended cab and shell on the back. It's not jacked up, but it sits up pretty good and is longer than any parking place I've ever tried to put it into. I consider this truck to be the consolation prize for paying uninsured motorist coverage for 45 years and having Pilar Martinez totally destroy a 3/4 ton Silverado on March 1, 2007 by plowing into me at traffic speed while I was stopped at a light. If you pay for uninsured motorist coverage, ask your insurance agent *exactly* what that means... I bet it's different than what you think it means. But I digress, sorry... So here I am with a car that is shorter from top to road than the hood on my truck, and the driver thought it would be safe to run a red light and see if they could get past me before I got into the lane. The right side of my front bumper was almost into the driver's window when I hit the brakes and wheeled it left. Fortunately for all involved, I saw it soon enough, and pulled into the 2nd lane for making a left to go back South. I looked in my mirror, signalled a move, then moved over behind the yuck in the rat car. I then punched it, and the future hood ornament and I both made it through the next light. I pulled alongside to let her know that she was DEFINITELY Number 1 in my book, and it's a middle-age woman looking at me with a "sorry, it was an accident" show of pouty face and arms held up. Tough $hit lady... that may have worked when you were 18, but it's not working anymore, and it wasn't an accident... you ran a freakin' red light and almost got yourself killed. That just about put a bow on the day... I was home later than usual, pissed off about work stuff, pissed off at traffic, and now that. I ate dinner, watched a little TV, and was asleep about 9:30 exhausted. Hope today is better.

    Read the article

  • Consumer Oriented Search In Oracle Endeca Information Discovery - Part 2

    - by Bob Zurek
    As discussed in my last blog posting on this topic, Information Discovery, a core capability of the Oracle Endeca Information Discovery solution enables businesses to search, discover and navigate through a wide variety of big data including structured, unstructured and semi-structured data. With search as a core advanced capabilities of our product it is important to understand some of the key differences and capabilities in the underlying data store of Oracle Endeca Information Discovery and that is our Endeca Server. In the last post on this subject, we talked about Exploratory Search capabilities along with support for cascading relevance. Additional search capabilities in the Endeca Server, which differentiate from simple keyword based "search boxes" in other Information Discovery products also include: The Endeca Server Supports Set Search.  The Endeca Server is organized around set retrieval, which means that it looks at groups of results (all the documents that match a search), as well as the relationship of each individual result to the set. Other approaches only compute the relevance of a document by comparing the document to the search query – not by comparing the document to all the others. For example, a search for “U.S.” in another approach might match to the title of a document and get a high ranking. But what if it were a collection of government documents in which “U.S.” appeared in many titles, making that clue less meaningful? A set analysis would reveal this and be used to adjust relevance accordingly. The Endeca Server Supports Second-Order Relvance. Unlike simple search interfaces in traditional BI tools, which provide limited relevance ranking, such as a list of results based on key word matching, Endeca enables users to determine the most salient terms to divide up the result. Determining this second-order relevance is the key to providing effective guidance. Support for Queries and Filters. Search is the most common query type, but hardly complete, and users need to express a wide range of queries. Oracle Endeca Information Discovery also includes navigation, interactive visualizations, analytics, range filters, geospatial filters, and other query types that are more commonly associated with BI tools. Unlike other approaches, these queries operate across structured, semi-structured and unstructured content stored in the Endeca Server. Furthermore, this set is easily extensible because the core engine allows for pluggable features to be added. Like a search engine, queries are answered with a results list, ranked to put the most likely matches first. Unlike “black box” relevance solutions, which generalize one strategy for everyone, we believe that optimal relevance strategies vary across domains. Therefore, it provides line-of-business owners with a set of relevance modules that let them tune the best results based on their content. The Endeca Server query result sets are summarized, which gives users guidance on how to refine and explore further. Summaries include Guided Navigation® (a form of faceted search), maps, charts, graphs, tag clouds, concept clusters, and clarification dialogs. Users don’t explicitly ask for these summaries; Oracle Endeca Information Discovery analytic applications provide the right ones, based on configurable controls and rules. For example, the analytic application might guide a procurement agent filtering for in-stock parts by visualizing the results on a map and calculating their average fulfillment time. 
Furthermore, the user can interact with summaries and filters without resorting to writing complex SQL queries. The user can simply just click to add filters. Within Oracle Endeca Information Discovery, all parts of the summaries are clickable and searchable. We are living in a search driven society where business users really seem to enjoy entering information into a search box. We do this everyday as consumers and therefore, we have gotten used to looking for that box. However, the key to getting the right results is to guide that user in a way that provides additional Discovery, beyond what they may have anticipated. This is why these important and advanced features of search inside the Endeca Server have been so important. They have helped to guide our great customers to success. 

    Read the article

  • Server-Sent Events using GlassFish (TOTD #179)

    - by arungupta
    Bhakti blogged about Server-Sent Events on GlassFish and I've been planning to try it out for past some days. Finally, I took some time out today to learn about it and build a simplistic example showcasing the touch points. Server-Sent Events is developed as part of HTML5 specification and provides push notifications from a server to a browser client in the form of DOM events. It is defined as a cross-browser JavaScript API called EventSource. The client creates an EventSource by requesting a particular URL and registers an onmessage event listener to receive the event notifications. This can be done as shown var url = 'http://' + document.location.host + '/glassfish-sse/simple';eventSource = new EventSource(url);eventSource.onmessage = function (event) { var theParagraph = document.createElement('p'); theParagraph.innerHTML = event.data.toString(); document.body.appendChild(theParagraph);} This code subscribes to a URL, receives the data in the event listener, adds it to a HTML paragraph element, and displays it in the document. This is where you'll parse JSON and other processing to display if some other data format is received from the URL. The URL to which the EventSource is subscribed to is updated on the server side and there are multipe ways to do that. GlassFish 4.0 provide support for Server-Sent Events and it can be achieved registering a handler as shown below: @ServerSentEvent("/simple")public class MySimpleHandler extends ServerSentEventHandler { public void sendMessage(String data) { try { connection.sendMessage(data); } catch (IOException ex) { . . . } }} And then events can be sent to this handler using a singleton session bean as shown: @Startup@Statelesspublic class SimpleEvent { @Inject @ServerSentEventContext("/simple") ServerSentEventHandlerContext<MySimpleHandler> simpleHandlers; @Schedule(hour="*", minute="*", second="*/10") public void sendDate() { for(MySimpleHandler handler : simpleHandlers.getHandlers()) { handler.sendMessage(new Date().toString()); } }} This stateless session bean injects ServerSentEventHandlers listening on "/simple" path. Note, there may be multiple handlers listening on this path. The sendDate method triggers every 10 seconds and send the current timestamp to all the handlers. The client side browser simply displays the string. The HTTP request headers look like: Accept: text/event-streamAccept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3Accept-Encoding: gzip,deflate,sdchAccept-Language: en-US,en;q=0.8Cache-Control: no-cacheConnection: keep-aliveCookie: JSESSIONID=97ff28773ea6a085e11131acf47bHost: localhost:8080Referer: http://localhost:8080/glassfish-sse/faces/index2.xhtmlUser-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.54 Safari/536.5 And the response headers as: Content-Type: text/event-streamDate: Thu, 14 Jun 2012 21:16:10 GMTServer: GlassFish Server Open Source Edition 4.0Transfer-Encoding: chunkedX-Powered-By: Servlet/3.0 JSP/2.2 (GlassFish Server Open Source Edition 4.0 Java/Apple Inc./1.6) Notice, the MIME type of the messages from server to the client is text/event-stream and that is defined by the specification. 
The code in Bhakti's blog can be further simplified by using the recently-introduced Twitter API for Java as shown below: @Schedule(hour="*", minute="*", second="*/10") public void sendTweets() { for(MyTwitterHandler handler : twitterHandler.getHandlers()) { String result = twitter.search("glassfish", String.class); handler.sendMessage(result); }} The complete source explained in this blog can be downloaded here and tried on GlassFish 4.0 build 34. The latest promoted build can be downloaded from here and the complete source code for the API and implementation is here. I tried this sample on Chrome Version 19.0.1084.54 on Mac OS X 10.7.3.

    Read the article

  • Asynchrony in C# 5 (Part II)

    - by javarg
    This article is a continuation of the series on asynchronous features included in the new Async CTP preview for the next versions of C# and VB. Check out Part I for more information. So, let’s continue with TPL Dataflow: Asynchronous functions TPL Dataflow Task based asynchronous Pattern Part II: TPL Dataflow Definition (quoting the Async CTP doc): “TPL Dataflow (TDF) is a new .NET library for building concurrent applications. It promotes actor/agent-oriented designs through primitives for in-process message passing, dataflow, and pipelining. TDF builds upon the APIs and scheduling infrastructure provided by the Task Parallel Library (TPL) in .NET 4, and integrates with the language support for asynchrony provided by C#, Visual Basic, and F#.” This means: data manipulation processed asynchronously. “TPL Dataflow is focused on providing building blocks for message passing and parallelizing CPU- and I/O-intensive applications”. Data manipulation is another hot area when designing asynchronous and parallel applications: how do you sync data access in a parallel environment? how do you avoid concurrency issues? how do you notify when data is available? how do you control how much data is waiting to be consumed? etc. Dataflow Blocks TDF provides data and action processing blocks. Imagine having preconfigured data processing pipelines to choose from, depending on the type of behavior you want. The most basic block is the BufferBlock<T>, which provides storage for some kind of data (instances of <T>). So, let’s review the data processing blocks available. Blocks are categorized into three groups: Buffering Blocks, Executor Blocks, and Joining Blocks. Think of them as electronic circuitry components :).. 1. BufferBlock<T>: it is a FIFO (First In First Out) queue. You can Post data to it and then Receive it synchronously or asynchronously. It synchronizes data consumption for only one receiver at a time (you can have many receivers but only one will actually process it). 2. BroadcastBlock<T>: the same FIFO queue for messages (instances of <T>) but it links the receiving event to all consumers (it makes the data available for consumption to N number of consumers). The developer can provide a function to make a copy of the data if necessary. 3. WriteOnceBlock<T>: it stores only one value and once it’s been set, it can never be replaced or overwritten again (immutable after being set). As with BroadcastBlock<T>, all consumers can obtain a copy of the value. 4. ActionBlock<TInput>: this executor block allows us to define an operation to be executed when posting data to the queue. Thus, we must pass in a delegate/lambda when creating the block. Posting data will result in an execution of the delegate for each item in the queue. You can also specify how many parallel executions to allow (degree of parallelism). 5. TransformBlock<TInput, TOutput>: this is an executor block designed to transform each input, which is why it defines an output parameter. It ensures messages are processed and delivered in order. 6. TransformManyBlock<TInput, TOutput>: similar to TransformBlock but produces one or more outputs from each input. 7. BatchBlock<T>: combines N single items into one batch item (it buffers and batches inputs). 8. JoinBlock<T1, T2, …>: it generates tuples from all inputs (it aggregates inputs). Inputs could be of any type you want (T1, T2, etc.). 9. BatchJoinBlock<T1, T2, …>: aggregates tuples of collections.
It generates collections for each type of input and then creates a tuple to contain each collection (Tuple<IList<T1>, IList<T2>>). Next time I will show some examples of usage for each TDF block. * Images taken from Microsoft’s Async CTP documentation.
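    In the meantime, here is a minimal C# sketch of two of these blocks wired together. It is written against the System.Threading.Tasks.Dataflow namespace as the library later shipped; the exact assembly and namespace in the Async CTP preview may differ, so treat it as illustrative rather than CTP-exact.

        using System;
        using System.Threading.Tasks.Dataflow;

        class DataflowSketch
        {
            static void Main()
            {
                // Executor block that transforms each input (block 5 above)
                var square = new TransformBlock<int, int>(n => n * n);

                // Executor block that consumes each result (block 4 above)
                var print = new ActionBlock<int>(n => Console.WriteLine("result: {0}", n));

                // Wire the pipeline and propagate completion downstream
                square.LinkTo(print, new DataflowLinkOptions { PropagateCompletion = true });

                for (int i = 1; i <= 5; i++)
                    square.Post(i);          // post data into the head of the pipeline

                square.Complete();           // signal that no more input is coming
                print.Completion.Wait();     // block until everything has been printed
            }
        }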

    Read the article

  • Get More From Your Service Request

    - by Get Proactive Customer Adoption Team
    Leveraging Service Request Best Practices Use best practices to get there faster. In the daily conversations I have with customers, they sometimes express frustration over their Service Requests. They often feel powerless to make needed changes, so their sense of frustration grows. To help you avoid some of the frustration you might feel in dealing with your Service Requests (SR), here are a few pointers that come from our best practice discussions. Be proactive. If you can anticipate some of the questions that Support will ask, or the information they may need, try to provide this up front, when you log the SR. This could be output from the Remote Diagnostic Agent (RDA), if this is a database issue, or the output from another diagnostic tool, if you’re an EBS customer. Any information you can supply that helps us understand the situation better, helps us resolve the issue sooner. As you use some of these tools proactively, you might even find the solution to the problem before you log an SR! Be right. Make sure you have the correct severity level. Since you select the initial severity level, it’s easy to accept the default without considering how significant this may be. Business impact is the driving factor, so make sure you take a moment to select the severity level that is appropriate to the situation. Also, make sure you ask us to change the severity level, should the situation dictate. Be responsive! If this is an important issue to you, quickly follow up on any action plan submitted to you by Oracle Support. The support engineer assigned to your Service Request will be able to move the issue forward more aggressively when they have the needed information. This is crucial in resolving your issues in a timely manner. Be thorough. If there are five questions in the action plan, make sure you provide an answer for all five questions in one response, rather than trickling them in one at a time. This will allow the engineer to look at all of the information as a whole and to avoid multiple trips to your SR, saving valuable time and getting you a resolution sooner. Be your own advocate! You know your situation best; make sure Oracle Support understands both how and why this issue is important to you and your company. Use the escalation process if you're concerned that your SR isn't going the right direction, the right pace, or through the right person. Don't wait until you're frustrated and angry. An escalation is as simple as a quick conversation on the phone and can be amazingly effective in getting your issues back on track. The support manager you speak with is empowered to make any needed changes. Be our partner. You can make your support experience better. When your SR has been resolved, you may receive a survey request. This is intended to get your feedback about how your SR went and what we can do to improve your overall support experience. Oracle Support is here to help you. Our goal with any Service Request is to provide the best possible solution as quickly as possible. With your help, we’ll be able to do this with your Service Request too.  

    Read the article

  • The endpoint reference (EPR) for the Operation not found is

    - by Denise Wu
    I have been struggling with the following error the last couple of days can you please help! I generated my server and client code using the wsdl2java tool from a wsdl 2.0 file. When invoking the webservice I am getting the following error: org.apache.axis2.AxisFault: The endpoint reference (EPR) for the Operation not found is /axis2/services/MyService/authentication/?username=Denise345&password=xxxxx and the WSA Action = null My service is displayed on the axis2 webpage with all available methods. Here is the output from TcpMon ============== Listen Port: 8090 Target Host: 127.0.0.1 Target Port: 8080 ==== Request ==== GET /axis2/services/MyService/authentication/?username=Denise345&password=xxxxx HTTP/1.1 Content-Type: application/x-www-form-urlencoded; charset=UTF-8 SOAPAction: "" User-Agent: Axis2 Host: 127.0.0.1:8090 ==== Response ==== HTTP/1.1 500 Internal Server Error Server: Apache-Coyote/1.1 Content-Type: application/xml;charset=UTF-8 Transfer-Encoding: chunked Date: Thu, 12 May 2011 15:53:20 GMT Connection: close 12b <soapenv:Reason xmlns:soapenv="http://www.w3.org/2003/05/soap-envelope"> <soapenv:Text xml:lang="en-US">The endpoint reference (EPR) for the Operation not found is /axis2/services/MyService/authentication/?username=Denise345&password=xxxxx and the WSA Action = null</soapenv:Text></soapenv:Reason> 0 ============== I am using: * axis2-1.5.4 * Tomcat 7.0.8 * wsdl 2.0 file Please help!

    Read the article

  • Selenium RC selenium-testrunner.js Access denied error on IEProxy - Help??

    - by melaos
    Webpage error details User Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; .NET4.0C; .NET4.0E) Timestamp: Wed, 28 Apr 2010 02:07:17 UTC Message: Access is denied. Line: 177 Char: 9 Code: 0 URI: http://www.google.com/selenium-server/core/scripts/selenium-testrunner.js Hi guys, i'm just starting to learn up on selenium and while testing using mostly test cases and test suite creating using selenium IDE firefox, i'm having some problem getting it to work properly in internet explorer. this is the cmd line that i'm using: java -jar "selenium-server.jar" -htmlSuite *iexploreproxy "http://www.google.com/" tests/OR_Discount_UAT_Suite.htm results.html -userExtensions user-extensions.js i try using the *iexplore but kept getting session id expired error and try with the proxy version instead. i can now see the testrunner but keep getting the access denied error. i then try the same cmd line using firefox: java -jar "selenium-server.jar" -htmlSuite *firefox3 "http://www.google.com/" tests/OR_Discount_UAT_Suite.htm results.html -userExtensions user-extensions.js FYI, i've already unchecked the auto detect proxy setting in IE8. and i can get everything running perfectly. so im not sure what's the problem right now :( anybody can help? thanks!

    Read the article

  • TeamCity stopped working once I added NUnit to the mix

    - by Dave
    I'm struggling a lot trying to get our build server going. I am currently running tests in a Windows XP virtual machine, and have installed TeamCity v5.0.3, build 10821. I am using NUnit v2.5.3. I finished the initial setup with TeamCity without any issues at all, provided that I use the sln2008 build runner that makes the entire process almost brainless. It's really quite nice that way, and very satisfying to see your first successful automated build. Now it's time to kick it up a notch and I wanted to get NUnit working. I keep the NUnit 2.5.3 assemblies in an external libs folder in SVN, so I checked that out onto the test system. I selected NUnit 2.5.3 from the build runner options, as the online instructions had recommended. But when I build, I get the following error: Window1.xaml.cs(14,7): error CS0246: The type or namespace name ‘NUnit’ could not be found (are you missing a using directive or an assembly reference?) Window1.xaml.cs(28,10): error CS0246: The type or namespace name ‘Test’ could not be found (are you missing a using directive or an assembly reference?) Window1.xaml.cs(28,10): error CS0246: The type or namespace name ‘TestAttribute’ could not be found (are you missing a using directive or an assembly reference?) Everything compiles great in the IDE. From finding blog posts and submitting comments, I got some advice and confirmed the following: I have the HintPath value set properly in my project file (points to the external lib) I can also do a full Release and Debug build from the command line using msbuild I have tried do use the NUnit installer so nunit.framework.dll gets registered into the GAC I have changed the build agent's logon account to be a user on the test system, rather than LOCAL SYSTEM. Nothing seems to help... can anyone else here offer me some advice on what to try next?
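    For reference, a csproj reference that resolves from a checked-out libs folder rather than the GAC normally looks something like the sketch below; the relative path here is an assumption and needs to match wherever the build agent checks out the NUnit 2.5.3 assemblies, otherwise MSBuild falls back to the standard reference paths.

        <!-- Sketch of a project-file reference; the HintPath is a hypothetical relative path -->
        <Reference Include="nunit.framework">
          <HintPath>..\libs\NUnit-2.5.3\bin\net-2.0\framework\nunit.framework.dll</HintPath>
          <Private>True</Private>
        </Reference>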

    Read the article

  • Uncompress GZIPed HTTP Response in Java

    - by bill0ute
    Hi, I'm trying to uncompress a GZIPed HTTP Response by using GZIPInputStream. However I always have the same exception when I try to read the stream : java.util.zip.ZipException: invalid bit length repeat My HTTP Request Header: GET www.myurl.com HTTP/1.0\r\n User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; fr; rv:1.9.2) Gecko/20100115 Firefox/3.6\r\n Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\n Accept-Language: fr,fr-fr;q=0.8,en-us;q=0.5,en;q=0.3\r\n Accept-Encoding: gzip,deflate\r\n Accept-Charset: ISO-8859-1,UTF-8;q=0.7,*;q=0.7\r\n Keep-Alive: 115\r\n Connection: keep-alive\r\n X-Requested-With: XMLHttpRequest\r\n Cookie: Some Cookies\r\n\r\n At the end of the HTTP Response header, I get path=/Content-Encoding: gzip, followed by the gziped response. I tried 2 similars codes to uncompress : GZIPInputStream gzip = new GZIPInputStream (new ByteArrayInputStream (tBytes)); StringBuffer szBuffer = new StringBuffer (); byte tByte [] = new byte [1024]; while (true) { int iLength = gzip.read (tByte, 0, 1024); // <-- Error comes here if (iLength < 0) break; szBuffer.append (new String (tByte, 0, iLength)); } And this one that I get on this forum : InputStream gzipStream = new GZIPInputStream (new ByteArrayInputStream (tBytes)); Reader decoder = new InputStreamReader (gzipStream, "UTF-8");//<- I tried ISO-8859-1 and get the same exception BufferedReader buffered = new BufferedReader (decoder); I guess this is an encoding error. Best regards, bill0ute
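    For what it's worth, this particular ZipException usually means the bytes reaching GZIPInputStream are not exactly the gzip payload (leftover headers, chunked transfer-encoding markers, or a truncated read will all corrupt the deflate stream). Once the body bytes are clean, a sketch like the one below, which reads characters through a Reader so multi-byte sequences are never split across buffer boundaries, would be a typical way to decode it; the class and method names are made up for illustration.

        import java.io.*;
        import java.util.zip.GZIPInputStream;

        public class GzipBodyReader {
            // Hypothetical helper: 'body' must contain only the gzip payload,
            // with HTTP headers and any chunk framing already stripped
            public static String readGzippedBody(byte[] body, String charset) throws IOException {
                GZIPInputStream gzip = new GZIPInputStream(new ByteArrayInputStream(body));
                BufferedReader reader = new BufferedReader(new InputStreamReader(gzip, charset));
                StringBuilder sb = new StringBuilder();
                char[] buf = new char[4096];
                int len;
                while ((len = reader.read(buf)) != -1) {
                    sb.append(buf, 0, len);   // append chars, not raw bytes, so the encoding survives
                }
                return sb.toString();
            }
        }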

    Read the article

  • Stresstesting ASP.NET/IIS with WCAT

    - by MartinHN
    I'm trying to setup a stress/load test using the WCAT toolkit included in the IIS Resources. Using LogParser, I've processed a UBR file with configuration. It looks something like this: [Configuration] NumClientMachines: 1 # number of distinct client machines to use NumClientThreads: 100 # number of threads per machine AsynchronousWait: TRUE # asynchronous wait for think and delay Duration: 5m # length of experiment (m = minutes, s = seconds) MaxRecvBuffer: 8192K # suggested maximum received buffer ThinkTime: 0s # maximum think-time before next request WarmupTime: 5s # time to warm up before taking statistics CooldownTime: 6s # time to cool down at the end of the experiment [Performance] [Script] SET RequestHeader = "Accept: */*\r\n" APP RequestHeader = "Accept-Language: en-us\r\n" APP RequestHeader = "User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; .NET CLR 1.0.3705)\r\n" APP RequestHeader = "Host: %HOST%\r\n" NEW TRANSACTION classId = 1 NEW REQUEST HTTP ResponseStatusCode = 200 Weight = 45117 verb = "GET" URL = "http://Url1.com" NEW TRANSACTION classId = 3 NEW REQUEST HTTP ResponseStatusCode = 200 Weight = 13662 verb = "GET" URL = "http://Url1.com/test.aspx" Does it look OK? I execute the controller with this command: wcctl -z StressTest.ubr -a localhost The Client(s) is executed like this: wcclient localhost When the client is executed, I get this error: main client thread Connect Attempt 0 Failed. Error = 10061 Has anyone in this world ever used WCAT?

    Read the article

  • Parsing with BeautifulSoup, error message TypeError: coercing to Unicode: need string or buffer, NoneType found

    - by Samsun Knight
    so I'm trying to scrape an Amazon page for data, and I'm getting an error when I try to parse for where the seller is located. Here's my code: #getting the html request = urllib2.Request('http://www.amazon.com/gp/offer-listing/0393934241/') opener = urllib2.build_opener() #hiding that I'm a webscraper request.add_header('User-Agent', 'Mozilla/5 (Solaris 10) Gecko') #opening it up, putting into soup form html = opener.open(request).read() soup = BeautifulSoup(html, "html5lib") #parsing for the seller info sellers = soup.findAll('div', {'class' : 'a-row a-spacing-medium olpOffer'}) for eachseller in sellers: #parsing for price price = eachseller.find('span', {'class' : 'a-size-large a-color-price olpOfferPrice a-text-bold'}) #parsing for shipping costs shippingprice = eachseller.find('span' , {'class' : 'olpShippingPrice'}) #parsing for condition condition = eachseller.find('span', {'class' : 'a-size-medium'}) #parsing for seller name sellername = eachseller.find('b') #parsing for seller location location = eachseller.find('div', {'class' : 'olpAvailability'}) #printing it all out print "price, " + price.string + ", shipping price, " + shippingprice.string + ", condition," + condition.string + ", seller name, " + sellername.string + ", location, " + location.string I get the error message, pertaining to the 'print' command at the end, "TypeError: coercing to Unicode: need string or buffer, NoneType found" I know that it's coming from this line - location = eachseller.find('div', {'class' : 'olpAvailability'}) - because the code works fine without that line, and I know that I'm getting NoneType because the line isn't finding anything. Here's the html from the section I'm looking to parse: <*div class="olpAvailability"> In Stock. Ships from WI, United States. <*br/><*a href="/gp/aag/details/ref=olp_merch_ship_9/175-0430757-3801038?ie=UTF8&amp;asin=0393934241&amp;seller=A1W2IX7T37FAMZ&amp;sshmPath=shipping-rates#aag_shipping">Domestic shipping rates</a> and <*a href="/gp/aag/details/ref=olp_merch_return_9/175-0430757-3801038?ie=UTF8&amp;asin=0393934241&amp;seller=A1W2IX7T37FAMZ&amp;sshmPath=returns#aag_returns">return policy</a>. <*/div> (but without the stars - just making sure the HTML doesn't compile out of code form) I don't see what's the problem with the 'location' line of code, or why it's not pulling the data I want. Help?
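    One detail worth noting: location.string is None whenever a tag has more than one child, and the olpAvailability div here contains text plus <br/> and <a> elements, so concatenating it raises exactly this TypeError even though the div was found. A defensive sketch like the one below, using get_text() plus a guard in case the div is missing entirely, keeps the scraper running; the fallback text is an arbitrary placeholder.

        # Guard against a missing div and avoid .string, which is None for multi-child tags
        location = eachseller.find('div', {'class': 'olpAvailability'})
        if location is not None:
            location_text = location.get_text(strip=True)
        else:
            location_text = "location not listed"   # arbitrary placeholder

        print "location, " + location_text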

    Read the article

  • Remoting server forcibly closing client connections in the middle of remote calls

    - by Carsten Hess
    Hello everyone. I have a system consisting of a server accepting remoting calls from clients with TCP as the underlying transportlayer. It normally works like a charm, but if I increase the no. of clients, the server starts at random to close the TCP connections in the middle of the calls. Not all calls are interrupted this way. That is really unexpected behaviour... I get no exceptions on the server side, just the client side exception: System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host Server stack trace: ved System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags) ved System.Runtime.Remoting.Channels.SocketStream.Read(Byte[] buffer, Int32 offset, Int32 size) ved System.Runtime.Remoting.Channels.SocketHandler.ReadFromSocket(Byte[] buffer, Int32 offset, Int32 count) ved System.Runtime.Remoting.Channels.SocketHandler.Read(Byte[] buffer, Int32 offset, Int32 count) ved System.Runtime.Remoting.Channels.SocketHandler.ReadAndMatchFourBytes(Byte[] buffer) ved System.Runtime.Remoting.Channels.Tcp.TcpSocketHandler.ReadAndMatchPreamble() ved System.Runtime.Remoting.Channels.Tcp.TcpSocketHandler.ReadVersionAndOperation(UInt16& operation) ved System.Runtime.Remoting.Channels.Tcp.TcpClientSocketHandler.ReadHeaders() ved System.Runtime.Remoting.Channels.Tcp.TcpClientTransportSink.ProcessMessage(IMessage msg, ITransportHeaders requestHeaders, Stream requestStream, ITransportHeaders& responseHeaders, Stream& responseStream) ved System.Runtime.Remoting.Channels.BinaryClientFormatterSink.SyncProcessMessage(IMessage msg) Exception rethrown at [0]: ved System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg) ved System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type) ved EBH.GuG.AgentKit.Transports.RemotingAgentHostEndPoint.SyncInvoke(Agent a, Int32 port)

    Read the article

  • eclipse tomcat debug mode slow - pegs cpu

    - by andersonbd1
    Running Tomcat through eclipse works fine in non-debug mode, but not in debug mode. When I try to start the Tomcat server in debug mode, the console output looks fine for a while, but then starts slowing down and eventually just stops, pegging the cpu at 100%. I don't think it's relevant, but just in case - here's the console output right about when it starts slowing down and eventually stopping (by stopping I mean no more console output, but still 100% cpu). 2009-09-02 14:35:30,859 INFO NONE org.springframework.context.weaving.DefaultContextLoadTimeWeaver:72 - Found Spring's JVM agent for instrumentation 2009-09-02 14:35:49,562 INFO NONE org.springframework.beans.factory.support.DefaultListableBeanFactory:414 - Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@ed889d: defining beans [... 2009-09-02 14:37:31,031 INFO NONE org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean:221 - Building JPA container EntityManagerFactory for persistence unit ... I tried everything I could think of to fix it: cleanesd tomcat working directory restarted eclipse restarted Windows refreshed/cleaned all projects I first had this problem last week using eclipse ganymede. I had been running fine in debug-mode for several months prior to this issue. I didn't make any significant changes to our project that would cause this. Eventually, I upgraded to eclipse galileo which solved my problem. Now 2 days later, I'm having the same problem in galileo. Like I said it works fine in non-debug mode. Any help is much appreciated. I should add that other things work in debug mode - for instance junit tests, so it is something specific to tomcat.

    Read the article

  • Good locations worldwide for a coder gypsy wannabe

    - by fung
    Yes, this is not programming related but please bear with me =). I run a small niche SaaS business. Lately I've been thinking of traveling and experiencing life in other places. Would really appreciate suggestions for good places a developer could relocate to. In particular I'm looking for a place that: Has good internet connection (cheap stable broadband, lots of places that provide free wifi, etc.) Low cost of living (rent and food fairly cheap). At least half of the population speak English. Has a local courier agent (DHL, Fedex, any...). The government allows for extended stay of foreigners. I'm thinking of staying for about 6 months at each location and maybe doing it for 3 years. So looking for 5 to 6 locations in total. So if any of you think you're staying in a place that would be great for a visiting developer then please shout out. Include as detailed a description as possible. And include any cons about the place if there are. The only place that pops to mind right now is Bali =). Isle of Skye also seems interesting but I think immigration is tight and cost of living would definitely be higher. Thanks in advance for suggestions =)

    Read the article

  • Newbie, deciding Python or Erlang

    - by Joe
    Hi Guys, I'm an Administrator (unix, Linux and some windows apps such as Exchange) by experience and have never worked in any programming language besides C# and scripting on Bash and lately on powershell. I'm starting out as a service provider and using multiple network/server monitoring tools based on open source (nagios, opennms etc) in order to monitor them. At this moment, being inspired by a design that I came up with, to do more than what is available with the open source at this time, I would like to start programming and test some of these ideas. The requirement is for server software that captures a stream of data and stores it in a database (CouchDB or MongoDB preferably), and the client side (an agent installed on a server) would send this stream of data on a schedule of every 10 minutes or so. For these two core ideas, I have been reading about Python and Erlang besides Ruby. I do plan to use either Amazon or Rackspace where the server platform would run. This gives me the scalability needed when we have more customers with many servers. For that reason alone, I thought Erlang was a better fit (I could be totally wrong, new to this game) and I understand that Erlang has limited support in some ways compared to Ruby or Python. But I'm also totally new to the programming realm of things and any advice would be greatly appreciated. Jo

    Read the article

  • LINQ-to-SQL: ExecuteQuery(Type, String) populates one field, but not the other

    - by Daniel Schaffer
    I've written an app that I use as an agent to query data from a database and automatically load it into my distributed web cache. I do this by specifying an sql query and a type in a configuration. The code that actually does the querying looks like this: List<Object> result = null; try { result = dc.ExecuteQuery(elementType, entry.Command).OfType<Object>().ToList(); } catch (Exception ex) { HandleException(ex, WebCacheAgentLogEvent.DatabaseExecutionError); continue; } elementType is a System.Type created from the type specified in the configuration (using Type.GetType()), and entry.Command is the SQL query. The specific entity type I'm having an issue with looks like this: public class FooCount { [Column(Name = "foo_id")] public Int32 FooId { get; set; } [Column(Name = "count")] public Int32 Count { get; set; } } The SQL query looks like this: select foo_id as foo_id, sum(count) as [count] from foo_aggregates group by foo_id order by foo_id For some reason, when the query is executed, the "Count" property ends up populated, but not the "FooId" property. I tried running the query myself, and the correct column names are returned, and the column names match up with what I've specified in my mapping attributes. Help!
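    The symptom (Count populated, FooId not) looks as though the materializer is matching result columns to member names here rather than to the Column attribute's Name values: "count" happens to match the Count property, while "foo_id" does not match FooId. A low-risk workaround to try, sketched below, is simply to alias the columns to the CLR property names; apart from the aliases the query is unchanged.

        // Hedged workaround sketch: alias result columns to the property names
        string sql = @"select foo_id as FooId, sum(count) as [Count]
                       from foo_aggregates
                       group by foo_id
                       order by foo_id";

        List<Object> result = dc.ExecuteQuery(elementType, sql).OfType<Object>().ToList();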

    Read the article

  • Codeigniter + TankAuth + Swfupload not able to get the logged-in user id

    - by Manny Calavera
    Hello. I am using CodeIgniter with the TankAuth library installed and trying to upload to index.php/requests/doUpload from SWFUpload, but I can't access the page as authenticated. I have read many posts around the net about a similar problem and tried to set $config['sess_match_useragent'] = FALSE;, but it still makes no difference. I ended up skipping the login check in my controller for testing purposes, but now I need to access the TankAuth library from my controller to get the current logged-in user ID. It is required by my application and I cannot skip it; I really need to pass the logged-in user ID to that doUpload model. I have set up the controller like this:

        function doUploadFileFn()
        {
            if (!$this->tank_auth->is_logged_in())
            {
                return;
            }
            else
            {
                $user_id = $this->tank_auth->get_user_id();
                $this->load->model('requests/doUploadFile');
                $this->doUploadFile->uploadData($user_id);
            }
        }

    Now, it does not pass the is_logged_in() check. As I learned from other posts, CI deletes the session, but I have set the config not to match the user agent and it is still not working. Is there any solution to this? Thank you.

    Read the article

  • Parse MIME messages

    - by Abhimanyu
    Hi, my new project has an email module and I need to show all the email information on the web. When I make a call to the server I get base64-encoded MIME data; after base64-decoding it I get the MIME data as follows:

        /*****************Mime data start *******************************/
        From [email protected] Tue Jun 23 12:01:02 2009
        Date: Tue, 23 Jun 2009 12:01:02 +0530
        From: Prashant R Naik <[email protected]>
        To: [email protected]
        Subject: This is a test mail
        Message-ID: <[email protected]>
        Reply-To: Prashant R Naik <[email protected]>
        MIME-Version: 1.0
        Content-Type: multipart/mixed; boundary="ReaqsoxgOBHFXBhH"
        Content-Disposition: inline
        User-Agent: Mutt/1.5.18 (2008-05-17)
        Status: RO
        Content-Length: 1912
        Lines: 52

        --ReaqsoxgOBHFXBhH
        Content-Type: text/plain; charset=us-ascii
        Content-Disposition: inline

        Test mail. Initiated by prashant

        Regards,
        --
        Prashant R Naik
        Principal Technologist | Symbian & Web2.0
        Geodesic Limited | www.geodesic.com
        Tel: +91-80-66551000

        --ReaqsoxgOBHFXBhH
        Content-Type: image/gif
        Content-Disposition: attachment; filename="trash.gif"
        Content-Transfer-Encoding: base64

        R0lGODlhEAAQANUoADJ8wTqU2DmR1TqV2DN9wTSBxTWFyTaGyTJ9wTWGyTaKzjmS1TOAxTuV
        2DaFyTN8wDiN0jiO0jSAxTeKzjqS1DN8wTqR1TWFyjB4vTOBxTmO0TmS1DaKzTeJzTqV1zSA
        xDJ8wDqS1TeKzTF4vDF4vTiO0f///zuX2gAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
        AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACH5BAEAACgALAAA
        AAAQABAAAAaDQNRpSCwWhcakcsk8mZ5Qpik5pUKvT2W1uDVWp+BiYNAImAZmz/lcDoQEFoFp
        QTFtTPKFQLCAREolJiURJhCCJhqAJRMiIhwmjSYdJgqUjQoODgkJJgecBp0mBgYXBx8ZBQxY
        UAUSDAUACLEPDwgEAAAEIBUEtygkIyMkwMMYw8EjKEEAOw==

        --ReaqsoxgOBHFXBhH
        Content-Type: image/jpeg
        Content-Disposition: attachment; filename="bx.jpg"
        Content-Transfer-Encoding: base64

        /9j/4AAQSkZJRgABAQEASABIAAD/2wBDAAEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEB
        AQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQH/2wBDAQEBAQEBAQEBAQEBAQEB
        AQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQH/wAAR
        CAAUAAoDAREAAhEBAxEB/8QAFQABAQAAAAAAAAAAAAAAAAAAAAn/xAAYEAEAAwEAAAAAAAAA
        AAAAAAAAGWen5//EABQBAQAAAAAAAAAAAAAAAAAAAAD/xAAUEQEAAAAAAAAAAAAAAAAAAAAA
        /9oADAMBAAIRAxEAPwCb4AJHym0Vp3PQJTaK07noJHgA/9k=

        --ReaqsoxgOBHFXBhH
        Content-Type: image/png
        Content-Disposition: attachment; filename="day_bg.png"
        Content-Transfer-Encoding: base64

        iVBORw0KGgoAAAANSUhEUgAAAGQAAAApCAYAAADDJIzmAAAABmJLR0QA/wD/AP+gvaeTAAAA
        CXBIWXMAAAsTAAALEwEAmpwYAAAAB3RJTUUH2AwCCS0kTriU2QAAAB10RVh0Q29tbWVudABD
        cmVhdGVkIHdpdGggVGhlIEdJTVDvZCVuAAAAXElEQVR42u3bQQEAMAgDMZiqiZtP5AwbfeQk
        NO/WvPtLMR0TABEQIAICRECACAgQAREQIAICRECACAgQAREQIAICRECACAgQAREQIAICRECA
        CAgQARGQ7NpPPasFT+0FZPjBRwYAAAAASUVORK5CYII=

        --ReaqsoxgOBHFXBhH--
        /*****************Mime data end *******************************/

    Now the problem is that I have to parse this data and use it in my application. Since this data is not XML it is difficult to parse (parsing against tags would be easy), so can anyone who knows how to parse MIME data help me? I am using Erlang to parse this data. Thank you in advance.
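
    The asker wants to do this in Erlang, but as an illustration of how little code a MIME library needs, here is a minimal sketch using Python's standard email package; the function name and the assumption that the decoded message is already held in a string are mine:

        from email import message_from_string

        def summarize_mime(raw: str) -> None:
            msg = message_from_string(raw)               # parses the headers and the multipart structure
            print("From:   ", msg["From"])
            print("Subject:", msg["Subject"])
            for part in msg.walk():                      # visit every MIME part, containers included
                if part.is_multipart():
                    continue                             # skip the multipart/mixed container itself
                payload = part.get_payload(decode=True)  # reverses the base64 transfer encoding
                filename = part.get_filename()
                if filename:                             # the gif/jpeg/png attachments
                    print(f"attachment {filename}: {part.get_content_type()}, {len(payload)} bytes")
                else:                                    # the inline text/plain body
                    charset = part.get_content_charset() or "us-ascii"
                    print("text body:\n" + payload.decode(charset))

    On the Erlang side, the gen_smtp project ships a mimemail module intended for the same decoding job, which is probably worth evaluating before hand-rolling a parser.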

    Read the article

  • Having problems with uploading photos to TwitPic using OAuth in Objective C on the iPhone

    - by M. Bedi
    I have been working on an iPhone app that has a feature for uploading photos to TwitPic. I have it working with basic authentication and am now trying to get it working with OAuth, but I am getting authentication errors. I have studied the TwitPic documentation very carefully. I authorise the app by displaying a UIWebView, which returns a PIN value; I enter the PIN value in the app and request the token. I am able to post status updates to Twitter but not photos. My code is based on some example code from here: Example iPhone app using OAuth. Here is my code:

        NSString *url = @"http://api.twitpic.com/2/upload.json";
        NSString *oauth_header = [oAuth oAuthHeaderForMethod:@"POST" andUrl:url andParams:nil];
        NSLog(@"OAuth header : %@\n\n", oauth_header);

        ASIFormDataRequest *request = [ASIFormDataRequest requestWithURL:[NSURL URLWithString:url]];
        [request addRequestHeader:@"User-Agent" value:@"ASIHTTPRequest"];
        request.requestMethod = @"POST";
        [request addRequestHeader:@"X-Auth-Service-Provider" value:@"https://api.twitter.com/1/account/verify_credentials.json"];
        [request addRequestHeader:@"X-Verify-Credentials-Authorization" value:oauth_header];

        NSData *imageRepresentation = UIImageJPEGRepresentation(imageToUpload, 0.8);
        [request setData:imageRepresentation forKey:@"media"];
        [request setPostValue:@"Some Message" forKey:@"message"];
        [request setPostValue:TWITPIC_API_KEY forKey:@"key"];

        [request setDelegate:self];
        [request setDidFinishSelector:@selector(requestDone:)];
        [request setDidFailSelector:@selector(requestFailed:)];
        [request start];

    Here is the OAuth header:

        OAuth realm="http://api.twitter.com/",
              oauth_timestamp="1275492425",
              oauth_nonce="b686f20a18ba6763ac52b689b2ac0c421a9e4013",
              oauth_signature_method="HMAC-SHA1",
              oauth_consumer_key="zNbW3Xi3MuS7i9cpz6fw",
              oauth_version="1.0",
              oauth_token="147275699-jmrjpwk3B6mO2FX2BCc9Ci9CRBbBKYW1bOni2MYs",
              oauth_signature="d17HImz6VgygZgbcp845CD2qNnI%3D"
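
    For comparison, here is a rough sketch of the same OAuth Echo flow in Python using requests and requests_oauthlib, rather than the asker's ASIHTTPRequest/Objective-C stack; the credentials and file path are placeholders. The point it illustrates is that X-Verify-Credentials-Authorization is meant to carry a signature for a GET to Twitter's verify_credentials endpoint, not for the TwitPic upload URL:

        import requests
        from requests_oauthlib import OAuth1  # third-party dependency, assumed installed

        # Placeholder credentials -- substitute the app's keys and the user's access token.
        CONSUMER_KEY, CONSUMER_SECRET = "consumer-key", "consumer-secret"
        ACCESS_TOKEN, ACCESS_SECRET = "access-token", "access-secret"
        TWITPIC_API_KEY = "twitpic-api-key"

        VERIFY_URL = "https://api.twitter.com/1/account/verify_credentials.json"

        # Sign a GET against verify_credentials and lift the resulting Authorization header out.
        auth = OAuth1(CONSUMER_KEY, CONSUMER_SECRET, ACCESS_TOKEN, ACCESS_SECRET)
        signed = requests.Request("GET", VERIFY_URL, auth=auth).prepare()

        headers = {
            "X-Auth-Service-Provider": VERIFY_URL,
            "X-Verify-Credentials-Authorization": signed.headers["Authorization"],
        }
        data = {"key": TWITPIC_API_KEY, "message": "Some Message"}
        files = {"media": open("photo.jpg", "rb")}           # placeholder image path

        resp = requests.post("http://api.twitpic.com/2/upload.json",
                             headers=headers, data=data, files=files)
        print(resp.status_code, resp.text)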

    Read the article

< Previous Page | 61 62 63 64 65 66 67 68 69 70 71 72  | Next Page >