Search Results

Search found 2853 results on 115 pages for 'april 1st'.

Page 92 of 115

  • Undeliverable e-mail message from [email protected]

    - by QGfisher
    I am responsible for IT for a small charity and we have a problem with a few individuals who e-mail us on our hosted e-mail addresses. The individual is on btconnect and our server is also on BT broadband and using MS Exchange. I understand that the messages from [email protected] are generated by Exchange, but I can't tell whether this is a problem with our server (seems unlikely, as most people send and receive e-mails perfectly well) or with the sender's server. I have copied a sample test message below and would be very grateful if somebody could explain what is causing this problem. I have starred out the personal details - hope that's acceptable, but I don't want to compromise the individual's identity/security.
    ----- Original Message -----
    From: "System Administrator"
    To: "****" <****.***@btconnect.com>
    Sent: Tuesday, April 06, 2010 3:26 PM
    Subject: Undeliverable: Test Message
    Your message
    To: *****
    Subject: Test Message
    Sent: Tue, 6 Apr 2010 15:25:59 +0100
    did not reach the following recipient(s):
    ***@quiltersguild.org.uk on Tue, 6 Apr 2010 15:26:07 +0100
    The e-mail account does not exist at the organization this message was sent to. Check the e-mail address, or contact the recipient directly to find out the correct address.
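    One way to narrow down which side is at fault (a rough diagnostic sketch, not a definitive answer) is to test the bouncing address by hand against the mail host for the domain, assuming port 25 is reachable; the host and mailbox names below are placeholders for the domain's real MX host and the starred-out address:
        telnet mx.example-mailhost.co.uk 25              # connect to the MX host published for quiltersguild.org.uk
        HELO test.example.org                            # identify the testing client
        MAIL FROM:<[email protected]>                # any valid sender address
        RCPT TO:<starred-address@quiltersguild.org.uk>   # the mailbox that is bouncing
        QUIT
    A "250" reply to the RCPT TO means the hosted server accepts the mailbox, which would point towards the sender's side (for example, an old cached or autocompleted recipient entry that no longer matches a live Exchange mailbox). A "550 user unknown" at this step would point back at the hosted Exchange configuration.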

    Read the article

  • Reaching to the Holy Grail of Data Management

    - by Irem Radzik
    Pervasive, continuous access to trusted data. That’s the ultimate goal of data management. It enables organizations to leverage data as an asset to create value for customers and the organization. It creates the strong foundation needed to move the business forward. How you get there is also critical. As with all IT initiatives, using high-performance solutions with a low cost of ownership is another key requirement in today’s IT world. Oracle's data integration product strategy focuses on helping customers achieve this ultimate goal with high performance and low TCO. At OpenWorld, we will be showing how Oracle Data Integration products help you reach your data management goals, considering new trends in information management, such as big data and cloud computing. We will also provide an update on the latest product releases, such as Oracle GoldenGate 11gR2. If you will be at OpenWorld, please join us on Monday Oct 1st 10:45am at Moscone West – 3005 to hear our VP of Product Development, Brad Adelberg, present "Future Strategy, Direction, and Roadmap of Oracle’s Data Integration Platform". The Data Integration track at OpenWorld covers a variety of topics and speakers. In addition to product management of Oracle GoldenGate, Oracle Data Integrator, and Enterprise Data Quality presenting product updates and the roadmap, we have several customer panels and stand-alone sessions featuring select customers such as St. Jude Medical, Raymond James, Aderas, Turkcell, Paychex, Comcast, Ticketmaster, Bank of America and more. You can see an overview of Data Integration sessions here. If you are not able to attend OpenWorld, please check out our latest resources for Data Integration and Oracle GoldenGate. In the coming weeks you will see more blogs about our products’ new capabilities and what to expect at OpenWorld. I hope to see you at OpenWorld and stay in touch via our future blogs.

    Read the article

  • How to deal with overly aggressive "Link Take Down Demands"?

    - by Eoin
    I've been receiving a large number of emails recently requesting that I clean up link spam on my forum. Initially the emails were very polite and professional, and I was happy to remove the links. Recently the emails have gotten very abrasive; here is a particularly rude example:
    From: [email protected]
    To: [email protected]
    Hi, This is the second time we are reaching out to you regarding your link to our site hxxp://www.company-two.com from hxxp://www.my-forum.com/some-topic-id. We really do need to remove this link. We have to report to Google any link we were unable to remove, and I wouldn't want to have to include your site in the list. Could you please remove our link from this page and any other page on your site? Thank You, Name Changed
    Behind the superficial pleasantries I feel there is some very real maliciousness. Note the email address, DMCA Violations: I don't see how the DMCA is involved here, except as a phrase which tends to strike fear in many people. Also relating to the email address, it doesn't match the company being linked to at all. How am I to trust they are truly operating on behalf of company-two when they don't even use one of its email addresses? My email is hidden by privacypost. While a service with legitimate uses, I feel it's highly unprofessional for communications between two companies. The claim "This is the second time...": every email I've received has started like this, but a check of my spam filters has never revealed a 1st mail. Initially I gave them the benefit of the doubt; by now, though, it's clear this is a cheap ploy to start me off on the defensive. And finally, worst of all: the threats of reporting me to Google if I don't do everything they ask. I sent a polite reply asking for more information. I have no idea if the email address was even valid, but I never received any response. Much later I got this follow-up mail:
    From: [email protected]
    To: [email protected]
    Hi, This is the final time we are reaching out to you regarding your link to our site hxxp://www.company-two.com from hxxp://www.my-forum.com/some-topic-id. We will soon be reporting to Google any link we were unable to remove, and currently your site will have to be on the list. Could you please remove our link from this page and any other page on your site? I appreciate your urgent attention to this matter. Thank You, Name Changed
    This time the from address was more personal, though still not obviously connected to the spammed company. Let's be honest: I don't for one second believe that the companies were the victim of a 3rd-party spammer as they claim. The links in question were generated well over a year ago, and I firmly believe the companies were directly responsible for the spam links in question, a type of spam that has plagued my forum. Now they have the audacity to demand I spend my time cleaning up their mess, using threats to ensure they get their way. Have recent changes in Google's algorithms meant all the cash they spent spamming the web has now turned into a liability? If so, I can see why these companies are all of a sudden running scared. Frankly, cleaning up my forum is a good thing, but the threats they are using sicken me. So my question here is specifically about the threats: Are they valid, and would such reports to Google destroy my page rankings? Is there a way I can report this abusive behaviour to Google?

    Read the article

  • Oracle ZS3 Contest for Partners: Share an unforgettable experience at the Teatro Alla Scala in Milan

    - by Claudia Caramelli-Oracle
    Dear valued Partner, We are pleased to launch a partner contest exclusive to our partners dedicated to promoting and selling Oracle Systems! You are essential to the success of Oracle and we want to recognize your contribution and effort in driving Oracle Storage to the market. To show our appreciation we are delighted to announce a contest, giving the winners the opportunity to attend a roundtable chaired by Senior Oracle Executives and spend an unforgettable evening at the magnificent Teatro Alla Scala in Milan, followed by a stay at the Grand Hotel et de Milan, courtesy of Oracle. Recognition will be given to 12 partner companies (10 VARs & 2 VADs) for their ZFS storage booking achievement in the broad market between June 1st and July 18th 2014.
    Criteria of Eligibility: A minimum deal value of $30k is required for qualification. Partners who are wholly or partially owned by a public sector organization are not eligible for participation.
    Winners: The winning VARs will be: the highest ZS3 or ZBA bookings achievers by COB on July 18th, 2014 in each Oracle EMEA region (1), and the highest Oracle on Oracle (2) ZS3 or ZBA bookings achievers by COB on July 18th, 2014 in each Oracle EMEA region. The winning VADs (3) will be: the highest ZS3 or ZBA bookings achiever by COB on July 18th 2014 in EMEA, and the highest Oracle on Oracle (2) ZS3 or ZBA bookings achiever by COB on July 18th 2014 in EMEA.
    The Prize: Winners will be invited to participate in a roundtable chaired by Oracle on Monday September 8th 2014 in Milan and to be guests of Oracle on the evening of September 8th, 2014 at the Teatro Alla Scala. The evening will comprise a private tour of the Scala museum, a cocktail reception in the elegant museum rooms and a performance by the renowned soprano Maria Agresta. Our guests will then retire for the evening to the Grand Hotel et de Milan, courtesy of Oracle. Oracle shall be the final arbiter in selecting the winners and all winners will be notified via their Oracle account manager. Full details about the contest, expenses covered by Oracle and the timetable of events can be found on the Oracle EMEA Hardware (Servers & Storage) Partner Community workspace (FY15 Q1 ZFS Partner Contest). Remember: access to the community workspace requires membership. If you are not a member please register here. Good Luck!! For more information, please contact Sasan Moaveni.
    (1) Two VAR winners for each EMEA region – Eastern Europe & CIS, Middle East & Africa, South Europe, North Europe, UK/Ireland & Israel - as per the criteria outlined above. (2) Oracle on Oracle, in this instance, means ZS3 or ZBA storage attached to DB or DB options, Engineered Systems or Sparc servers sold to the same customer by the same partner within the contest timelines. (3) Two VAD winners, one for each of the criteria outlined above, will be selected from across EMEA.

    Read the article

  • Apache config that uses two document roots based on whether the requested resource exists in the first [closed]

    - by mattalexx
    Background: I have a client site that consists of a CakePHP installation and a Magento installation:
    /web/example.com/
    /web/example.com/app/ <== CakePHP
    /web/example.com/app/webroot/ <== DocumentRoot
    /web/example.com/app/webroot/store/ <== Magento
    /web/example.com/config/ <== Site-wide config
    /web/example.com/vendors/ <== Site-wide libraries
    The server runs Apache 2.2.3.
    The problem: The whole company has FTP access and got used to clogging up the /web/example.com/, /web/example.com/app/webroot/, and /web/example.com/app/webroot/store/ directories with their own files. Sometimes these files need HTTP access and sometimes they don't. In any case, this mess makes my job harder when it comes to maintaining the site. Code merges, tarring the live code, etc., are very complicated and usually require a bunch of filters.
    Abandoned solution: At first, I thought I would set up a new subdomain on the same server, move all of their files there, and change their FTP chroot. But that wouldn't work, for these reasons: Firstly, I have no idea (and neither do they remember) what marketing materials they've sent out that contain URLs to certain resources they've uploaded to the server, using the main domain, and also using abstract subdomains that use the main virtual host because it has ServerAlias *.example.com. So suddenly having them only use static.example.com isn't feasible. Secondly, the PHP scripts in their projects are potentially very non-portable. I want their files to stay in an environment as similar as possible to the one they were built in. Also, I do not want to debug their code to make it portable.
    Half-baked solution: After some thought, I decided to find a way to section off the actual website files into another directory that they would not touch. The company's uploaded files would stay where they were. This would ensure that I didn't break any of their projects that needed HTTP access. It would look something like this:
    /web/example.com/ <== A bunch of their files are in here
    /web/example.com/app/webroot/ <== 1st DocumentRoot; A bunch of their files are in here
    /web/example.com/app/webroot/store/ <== Some more are in here
    /web/example.com/site/ <== New dir; Contains only site files
    /web/example.com/site/app/ <== CakePHP
    /web/example.com/site/app/webroot/ <== 2nd DocumentRoot
    /web/example.com/site/app/webroot/store/ <== Magento
    /web/example.com/site/config/ <== Site-wide config
    /web/example.com/site/vendors/ <== Site-wide libraries
    After I made this change, I would not need to pay attention to anything except the stuff within /web/example.com/site/, and my job would be a lot easier. I would be the only one changing stuff in there. So here's where the Apache magic would happen: I need an HTTP request to http://www.example.com/ to first use /web/example.com/app/webroot/ as the document root. If nothing is found (no miscellaneous uploaded company projects are found), try finding something within /web/example.com/site/app/webroot/. Another thing to keep in mind: the site might have some problems if the $_SERVER['DOCUMENT_ROOT'] variable reads /web/example.com/app/webroot/ but the actual files are within /web/example.com/site/app/webroot/. It would be better if the DOCUMENT_ROOT environment variable could be /web/example.com/site/app/webroot/ for anything within the /web/example.com/site/app/webroot/ directory.
    Conclusion: Is my half-baked solution possible with Apache 2.2.3? Is there a better way to solve this problem?
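    A minimal sketch of one way to get the "try the old webroot first, fall back to the new tree" behaviour with mod_rewrite on Apache 2.2 is below. The directives are standard mod_rewrite, but the layout is illustrative, a <Directory> block allowing access to the new tree is assumed to exist, and $_SERVER['DOCUMENT_ROOT'] will still report the original DocumentRoot rather than the fallback directory, so the second concern in the question is not solved by this alone:
        <VirtualHost *:80>
            ServerName www.example.com
            ServerAlias *.example.com
            # The company's miscellaneous uploads are looked up here first
            DocumentRoot /web/example.com/app/webroot

            RewriteEngine On
            # If the request does not resolve to a real file or directory under the old webroot...
            RewriteCond /web/example.com/app/webroot%{REQUEST_URI} !-f
            RewriteCond /web/example.com/app/webroot%{REQUEST_URI} !-d
            # ...serve it from the developer-only site tree instead
            RewriteRule ^/(.*)$ /web/example.com/site/app/webroot/$1 [L]
        </VirtualHost>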

    Read the article

  • PARTNER WEBCAST SERIES: INNOVATIONS IN APPLICATIONS - PROGRAM

    - by mseika
    Dear Partner, We are pleased to invite you to join the Innovations in Applications webcast series. Innovations in Applications will present Oracle products' new functions and features, including sales positioning. The key objectives of these webcasts are to inspire Partners' personnel to conduct successful sales, after-sales and delivery at their Customers. Moreover, we aim to inspire you to pursue further Product Training and Certifications. And finally, we'll provide you a chance to join the Ecosystem's product-specific Community to learn and to contribute. Innovations in Applications will be presented as per the schedule below, after the billable day (4:00 to 5:00 PM CET). The webcast is intended for Partners' Implementation Certified Specialists, but Innovations in Applications is open to other Partner personnel as well. First, an Oracle representative will discuss Oracle's contribution to partners. Then you will see a product breakout session followed by Q&A with Oracle Experts. Each session will last for a maximum of 1 hour. A Q&A document covering all questions and answers will be made available after the webcast.
    What are the benefits for partners? Find out how Innovations in Applications helps you to improve your sales, after-sales and delivery. Discover new functions and features so you can enrich your Customers' solutions. Learn more about Oracle products, especially sales positioning. Hear crucial questions raised by colleagues, and learn from their interests. Engage and present your questions to subject experts. Be inspired by the richness of Oracle's product portfolio – for your and your customer's benefit. Be inspired to seek further Product Training and Certifications - make your competence known and recognized! Brand yourself! Note: Should you already be familiar with a specific Product, then choose another one. By doing so you will expand your knowledge of the overall product portfolio. Some presentations contain product demonstrations, although these presentations are not intended to be extremely detailed technical presentations.
    Useful links for you to bookmark: To access previously presented Product presentations and Public Sector Value Proposition presentations, please go to the Recordings tab. You might want to bookmark the Enablement blog page Oracle Partner Enablement. Please check this regularly as we publish lots of good content here just for you. You might want to bookmark the Knowledge Zones page for solution-focused pages designed to jump-start your path towards Specialization. You might want to bookmark the global event calendar page events.oracle.com.
    Delivery format: The Innovations in Applications program is a series of FREE prerecorded Oracle product presentations followed by Q&A. It will be delivered over the Web. Participants have the opportunity to submit questions during the webcast via chat, and subject matter experts will provide verbal answers live. Innovations in Applications consists of several parallel prerecorded product breakout sessions, each lasting for a maximum of 1 hour. First, an Oracle representative will discuss Oracle's contribution to Partners. Then you'll see the product breakout sessions followed by Q&A with Oracle Experts. A Q&A document covering all questions and answers will be made available after the webcast. You can also watch Innovations in Applications afterwards, as its content will be available online for the next 6-12 months.
The next Innovations in Applications webcasts will be presented on July 1st 2013 (please see the Next Webcast tab). For more information please click here. Note: depending on local network bandwidth, please allow a few seconds for the presentations to download. You might want to refresh your screen by pressing F5. Duration: maximum 1 hour. For further information please contact Markku Rouhiainen.

    Read the article

  • Problem in udp socket programing in c

    - by Md. Talha
    I compile the following C code for a UDP client and run './udpclient localhost 9191' in a terminal. At the "Enter text:" prompt I type "hello", but sendto() fails with the error below:
    Enter text: hello
    hello : error in sendto()guest-1SDRJ2@md-K42F:~/Desktop$
    Note: I first open the server port in another terminal with './server 9191'. I believe there is no error in the server code. The UDP client is not passing the message to the server. If I don't use threads, the message is passed, but I have to do it with threads.
    UDP client code:
    /* simple UDP echo client */
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <netdb.h>
    #include <stdio.h>
    #include <pthread.h>

    #define STRLEN 1024

    static void *readdata(void *);
    static void *writedata(void *);

    int sockfd, n, slen;
    struct sockaddr_in servaddr;
    char sendline[STRLEN], recvline[STRLEN];

    int main(int argc, char *argv[])
    {
        pthread_t readid, writeid;
        struct sockaddr_in servaddr;
        struct hostent *h;

        if (argc != 3) {
            printf("Usage: %s <proxy server ip> <port>\n", argv[0]);
            exit(0);
        }

        /* create hostent structure from user entered host name */
        if ((h = gethostbyname(argv[1])) == NULL) {
            printf("\n%s: error in gethostbyname()", argv[0]);
            exit(0);
        }

        /* create server address structure */
        bzero(&servaddr, sizeof(servaddr)); /* initialize it */
        servaddr.sin_family = AF_INET;
        memcpy((char *) &servaddr.sin_addr.s_addr, h->h_addr_list[0], h->h_length);
        servaddr.sin_port = htons(atoi(argv[2])); /* get the port number from argv[2] */

        /* create a UDP socket: SOCK_DGRAM */
        if ((sockfd = socket(AF_INET, SOCK_DGRAM, 0)) < 0) {
            printf("\n%s: error in socket()", argv[0]);
            exit(0);
        }

        pthread_create(&readid, NULL, &readdata, NULL);
        pthread_create(&writeid, NULL, &writedata, NULL);

        while (1) { };

        close(sockfd);
    }

    static void *writedata(void *arg)
    {
        /* get user input */
        printf("\nEnter text: ");
        do {
            if (fgets(sendline, STRLEN, stdin) == NULL) {
                printf("\n%s: error in fgets()");
                exit(0);
            }
            /* send a text */
            if (sendto(sockfd, sendline, sizeof(sendline), 0, (struct sockaddr *) &servaddr, sizeof(servaddr)) < 0) {
                printf("\n%s: error in sendto()");
                exit(0);
            }
        } while (1);
    }

    static void *readdata(void *arg)
    {
        /* wait for echo */
        slen = sizeof(servaddr);
        if ((n = recvfrom(sockfd, recvline, STRLEN, 0, (struct sockaddr *) &servaddr, &slen)) < 0) {
            printf("\n%s: error in recvfrom()");
            exit(0);
        }
        /* null terminate the string */
        recvline[n] = 0;
        fputs(recvline, stdout);
    }
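    One likely culprit, offered as a guess from reading the code rather than a verified fix: main() declares a second, local "struct sockaddr_in servaddr" that shadows the global one. main() fills in the local copy, while writedata() and readdata() use the global copy, which is still all zeros, so sendto() is handed an address with sin_family == 0 and fails. A minimal sketch of the change:
    /* globals shared with the threads */
    int sockfd, n, slen;
    struct sockaddr_in servaddr;          /* the ONE address both threads use */
    char sendline[STRLEN], recvline[STRLEN];

    int main(int argc, char *argv[])
    {
        pthread_t readid, writeid;
        /* struct sockaddr_in servaddr;     <-- remove this local declaration; it shadowed
                                                 the global, which stayed zeroed for the threads */
        struct hostent *h;
        /* ... rest of main() unchanged: it now fills in the global servaddr ... */
    }
    Replacing the bare "error in sendto()" message with perror("sendto") would also show the real reason (likely EAFNOSUPPORT or EINVAL) instead of a generic string.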

    Read the article

  • Career-Defining Moments

    - by Robz / Fervent Coder
    Originally posted on: http://geekswithblogs.net/robz/archive/2013/06/25/career-defining-moments.aspx Fear holds us back from many things. A little fear is healthy, but don’t let it overwhelm you into missing opportunities. In every career there is a moment when you can either step forward and define yourself, or sit down and regret it later. Why do we hold back: is it fear, constraints, family concerns, or that we simply can't do it? I think in many cases it comes to the unknown, and we are good at fearing the unknown. Some people hold back because they are fearful of what they don’t know. Some hold back because they are fearful of learning new things. Some hold back simply because to take on a new challenge it means they have to give something else up. The phrase sometimes used is “It’s the devil you know versus the one you don’t.” That fear sometimes allows us to miss great opportunities. In many people’s case it is the opportunity to go into business for yourself, to start something that never existed. Most hold back hear for a fear of failing. We’ve all heard the phrase “What would you do if you knew you couldn’t fail?”, which is intended to get people to think about the opportunities they might create. A better term I heard recently on the Ruby Rogues podcast was “What would be worth doing even if you knew you were going to fail?” I think that wording suits the intent better. If you knew (or thought) going in that you were going to fail and you didn’t care, it would open you up to the possibility of paying more attention to the journey and not the outcome. In my case it is a fear of acceptance. I am fearful that I may not learn what I need to learn or may not do a good enough job to be accepted. At the same time that fear drives me and makes me want to leap forward. Some folks would define this as “The Flinch”. I’m learning Ruby and Puppet right now. I have limited experience with both, limited to the degree it scares me some that I don’t know much about either. Okay, it scares me quite a bit! Some people’s defining moment might be going to work for Microsoft. All of you who know me know that I am in love with automation, from low-tech to high-tech automation. So for me, my “mecca” is a little different in that regard. Awhile back I sat down and defined where I wanted my career to go and it had to do more with DevOps, defined as applying developer practices to system administration operations (I could not find this definition when I searched). It’s an area that interests me and why I really want to expand chocolatey into something more awesome. I want to see Windows be as automatable and awesome as other operating systems that are out there. Back to the career-defining moment. Sometimes these moments only come once in a lifetime. The key is to recognize when you are in one of these moments and step back to evaluate it before choosing to dive in head first. So I am about to embark on what I define as one of these “moments.”  On July 1st I will be joining Puppet Labs and working to help make the Windows automation experience rock solid! I’m both scared and excited about the opportunity!

    Read the article

  • Automating Access 2007 Queries (changing one criteria)

    - by Graphth
    So, I have 6 queries and I want to run them all once at the end of each month. (I know a bit about SQL but they're simply built using Access's design view). So, in the next few days, perhaps I'll run the 6 queries for May, as May just ended. I only want the data from the month that just ended, so the query has Criteria set as the name of the month (e.g., May). Now, it's not hugely time consuming to change all of these each month, but is there some way to automate this? Currently, they're all set to April and I want to change them all to May when I run them in a few days. And each month, I'd like to type the month (perhaps in a textbox in a form or somewhere else if you know a better way) just once and have it change all 6 queries, without having to manually open all 6, scroll over to the right field and change the Criteria. Note (about VBA): I have used Excel VBA so I know the basics of VBA but I don't really know anything specific to Access (other than seeing code a few times). And, others will use this who do not know anything about Access VBA. So, I think I have found a similar question/answer that could do this in VBA, but I'd rather do it some other way. If the query needs to be slightly redesigned later, probably by someone who doesn't know Access VBA at all, it'd be nice to have a solution not involving VBA if that is even possible.
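    A common no-VBA approach, sketched here under the assumption that each query filters a month-name text column (the form, control and table names below are made up for illustration): turn the criterion into a reference to a control on a form, so the month is typed once and all 6 queries read it from the same place.
    -- In each query's design view, replace the hard-coded criterion ("May") on the month column
    -- with [Forms]![frmRunQueries]![txtMonth]; in SQL view that looks roughly like:
    PARAMETERS [Forms]![frmRunQueries]![txtMonth] Text ( 255 );
    SELECT *
    FROM tblData
    WHERE [MonthField] = [Forms]![frmRunQueries]![txtMonth];
    With frmRunQueries open and the month typed into txtMonth, all 6 queries pick up the new value with no per-query editing; if the underlying field is a real date instead of a month name, Format([DateField], "mmmm") = [Forms]![frmRunQueries]![txtMonth] is the equivalent criterion.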

    Read the article

  • Linq to SQL EntitySet Binding the MVVM way

    - by Savvas Sopiadis
    Hi everybody! In a WPF application I'm using LINQ to SQL classes (created by SqlMetal, thus implementing POCOs). Let's assume I have a table User and a table Pictures. These pictures are actually created from one picture; the difference between them may be the size, coloring, ... So every user may have more than one Picture, and the association is 1:N (User:Pictures). My problems: a) How do I bind, in an MVVM manner, a picture control to one picture (I will take one specific picture) in the EntitySet, to show it? b) Every time a user changes her picture, the whole EntitySet should be thrown away and the newly created Picture(s) should be added. Is this the correct way? e.g.
    //create the 1st picture object
    UserPicture1 = new UserPicture();
    UserPicture1.Description = "... some description.. ";
    UserPicture1.Image = imgBytes; //array of bytes
    //create the 2nd picture object
    UserPicture2 = new UserPicture();
    UserPicture2.Description = "... another description.. ";
    UserPicture2.Image = DoSomethingWithPreviousImg(imgBytes); //array of bytes
    //Assuming that the entityset is called Pictures
    //add these pictures to the corresponding user
    User.Pictures.Add(UserPicture1);
    User.Pictures.Add(UserPicture2);
    //save changes
    datacontext.Save()
    Thanks in advance
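    For (b), a minimal LINQ to SQL sketch of replacing a user's pictures rather than discarding the EntitySet object itself; the table property name UserPictures is an assumption about the generated DataContext, and SubmitChanges() is the standard LINQ to SQL save call:
    // mark the existing child rows for deletion, detach them, then add the new ones
    var oldPictures = user.Pictures.ToList();
    datacontext.UserPictures.DeleteAllOnSubmit(oldPictures);
    user.Pictures.Clear();

    user.Pictures.Add(userPicture1);
    user.Pictures.Add(userPicture2);
    datacontext.SubmitChanges();
    For (a), one common pattern is to expose a single picture (for example user.Pictures.FirstOrDefault()) as a property on the view model and bind the Image control to it, converting the byte array to an ImageSource with a value converter if needed.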

    Read the article

  • DomainDataSource DataPager with silverlight 3 DataGrid & .Net RIA Services

    - by Dennis Ward
    I have a simple datagrid example with silverlight 3, and am populating it with the .NET ria services using a DomainDataSource along with a DataPager declaratively (nothing in the code-behind), and am experiencing this problem: The LoadSize is 30, and the Page size is 15, and when the page is loaded, the 1st and 2nd page appear correctly, but when I go beyond the 2nd page, nothing shows up in the grid. This used to work in the silverlight 3 beta with the Mix 2009 preview of .NET Ria services, and I've got a really simple example and have verified that the Service on the web project gets called to load a new batch, but the grid doesn't show any data. Can anyone shed any light as to why grid displays data only for the initial load of data and not subsequent batches from the pager? Here's my xaml: <riaControls:DomainDataSource x:Name="ArtistSource" QueryName="GetArtist" AutoLoad="True" LoadSize="30" PageSize="15"> <riaControls:DomainDataSource.DomainContext> <domain:AdminContext /> </riaControls:DomainDataSource.DomainContext> </riaControls:DomainDataSource> <data:DataGrid Grid.Row="1" x:Name="ArtistDataGrid" ItemsSource="{Binding Data, ElementName=ArtistSource}"> </data:DataGrid> <StackPanel Grid.Row="2"> <data:DataPager Source="{Binding Data, ElementName=ArtistSource}" /> </StackPanel>

    Read the article

  • Apk Expansion Files - Application Licensing - Developer account - NOT_LICENSED response

    - by mUncescu
    I am trying to use the APK Expansion Files extension for Android. I have uploaded the APK to the server along with the expansion files. If the application was previously published, I get a response from the server saying NOT_LICENSED. The code I use is:
    APKExpansionPolicy aep = new APKExpansionPolicy(mContext, new AESObfuscator(getSALT(), mContext.getPackageName(), deviceId));
    aep.resetPolicy();
    LicenseChecker checker = new LicenseChecker(mContext, aep, getPublicKey());
    checker.checkAccess(new LicenseCheckerCallback() {
        @Override
        public void allow(int reason) { }

        @Override
        public void dontAllow(int reason) {
            try {
                switch (reason) {
                    case Policy.NOT_LICENSED:
                        mNotification.onDownloadStateChanged(IDownloaderClient.STATE_FAILED_UNLICENSED);
                        break;
                    case Policy.RETRY:
                        mNotification.onDownloadStateChanged(IDownloaderClient.STATE_FAILED_FETCHING_URL);
                        break;
                }
            } finally {
                setServiceRunning(false);
            }
        }

        @Override
        public void applicationError(int errorCode) {
            try {
                mNotification.onDownloadStateChanged(IDownloaderClient.STATE_FAILED_FETCHING_URL);
            } finally {
                setServiceRunning(false);
            }
        }
    });
    So if the application wasn't previously published, the allow method is called. If the application was previously published and now it isn't, the dontAllow method is called. I have tried: http://developer.android.com/guide/google/play/licensing/setting-up.html#test-response - here it says that if you use a developer or test account on your test device you can set a specific response; I use LICENSED as the response and still get NOT_LICENSED. Resetting the phone, clearing the Google Play Store cache and application data, and changing the versionCode number in different combinations still doesn't work.
    Edit: In case someone else is facing this problem, I received a mail from the Google support team: "We are aware that newly created accounts for testing in-app billing and Google licensing server (LVL) return errors, and are working on resolving this problem. Please stay tuned. In the meantime, you can use any accounts created prior to August 1st, 2012, for testing." So it seems to be a problem with their server; if I use the main developer account everything works fine.

    Read the article

  • Iphone xCode - Navigation Controller on second xib view?

    - by Frames84
    Everything is fine: my navigation controller displays my 'Menu 1' item, but when I click it there appears to be a problem with the line [self.navigationController pushViewController:c animated:YES]; - it doesn't reach the breakpoint in the myClass file. So I think I've not connected something, but I'm unsure what. My second view with the navigation controller doesn't have direct access to the AppDelegate, so I can't wire it up the way I see in some tutorials. The 1st view is just a button which, when clicked, calls:
    [self presentModalViewController:mainViewController animated:YES];
    My second view 'MainViewController' header looks like:
    @interface MainViewController : UITableViewController <UITableViewDelegate, UITableViewDataSource> {
        NSArray *controllers;
        UINavigationController *navController;
    }
    @property (nonatomic, retain) IBOutlet UINavigationController *navControllers;
    @property (nonatomic, retain) NSArray *controller;
    Then I have my MainViewController.m:
    @synthesize controllers;
    @synthesize navController;
    - (void)viewDidLoad {
        NSMutableArray *array = [[NSMutableArray alloc] init];
        myClass *c = [[myClass alloc] initWithStyle:UITableViewStylePlain];
        c.Title = @"Menu 1";
        [array addObject:c];
        self.Controllers = array;
        [array release];
    }
    I implemented numberOfRowsInSection and cellForRowAtIndexPath.
    - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
        NSUInteger row = [indexPath row];
        myClass *c = [self.controllers objectAtIndex:row];
        [self.navigationController pushViewController:c animated:YES]; // doesn't load myClass c
        // [self.navController pushViewController:c animated:YES];
    }
    Also, in Interface Builder I dragged a Navigation Controller onto my new XIB and changed the Root View Controller class to MainViewController, and also connected the File's Owner connector to the Navigation Controller to connect the navController outlet. Thanks for your time.
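    A sketch of the most common cause of a silently ignored push: when MainViewController is presented modally on its own, its navigationController property is nil, so pushViewController does nothing. Assuming the intent is a modal navigation stack, presenting a UINavigationController that wraps MainViewController avoids relying on the XIB wiring (names and pre-ARC memory management follow the question's style):
    // in the 1st view's button handler, instead of presenting mainViewController directly:
    UINavigationController *nav = [[UINavigationController alloc]
                                      initWithRootViewController:mainViewController];
    [self presentModalViewController:nav animated:YES];
    [nav release];   // pre-ARC, matching the original code
    Inside MainViewController, self.navigationController is then non-nil and the existing didSelectRowAtIndexPath code should push myClass as expected.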

    Read the article

  • curl POST to RESTful services

    - by Sashikiran Challa
    Hello All, There are a lot of questions on Stackoverflow about curl but I could not figure out what is that I am doing what I am not supposed to. I am trying to call a RESTful service that I had written using Jersey API and am trying to POST an xml string to it and I get HTTP 415 error which is supposed to be a Media Type error. Here in my shell script call to 1st service: abc=curl http://gf...:8080/InChItoD/inchi/3dstructure?InChIstring=$inchi echo $abc (this works fine the output that it returns is given below.) Posting this xml string to second service def= curl -d $abc -H "Content-Type:text/xml" http://gf...:8080/XML2G/xml3d/gssinput I get the following error: ... ... HTTP Status 415 Status report message description.The server refused this request because the request entity is in a format not supported by the requested resource for the requested method ().Apache Tomcat/6.0.26 This is a sample of xml string I am trying to POST <?xml version="1.0"?><molecule xmlns="http://www.xml-cml.org/schema"> <atomArray> <atom id="a1" elementType="N" formalCharge="1" x3="0.997963" y3="-0.002882" z3="-0.004222"/> <atom id="a2" elementType="H" x3="2.024650" y3="-0.002674" z3="0.004172"/> <atom id="a3" elementType="H" x3="0.655444" y3="0.964985" z3="0.004172"/> <atom id="a4" elementType="H" x3="0.649003" y3="-0.496650" z3="0.825505"/> <atom id="a5" elementType="H" x3="0.662767" y3="-0.477173" z3="-0.850949"/> </atomArray> <bondArray> <bond atomRefs2="a1 a2" order="1"/> <bond atomRefs2="a1 a3" order="1"/> <bond atomRefs2="a1 a4" order="1"/> <bond atomRefs2="a1 a5" order="1"/> </bondArray></molecule> Thanks in advance
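    A 415 from Jersey usually means the request body or its Content-Type did not survive the trip, and in the script above two shell issues can cause exactly that (sketched fix below; the URLs stay truncated exactly as in the question, everything else is illustrative): "abc=curl http://..." does not capture curl's output unless it is wrapped in command substitution, and the unquoted $abc splits the XML on whitespace so only a fragment of it is posted:
    # capture the first service's XML output intact (command substitution)
    abc=$(curl -s "http://gf...:8080/InChItoD/inchi/3dstructure?InChIstring=$inchi")
    echo "$abc"

    # post it to the second service; quoting "$abc" stops the shell from word-splitting the XML
    def=$(curl -s -d "$abc" -H "Content-Type: text/xml" "http://gf...:8080/XML2G/xml3d/gssinput")
    echo "$def"
    If the second resource is annotated with @Consumes("application/xml") rather than "text/xml", that mismatch alone can also produce a 415, so the annotation on the receiving method is worth checking as well.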

    Read the article

  • AS 400 Performance from .Net iSeries Provider

    - by Nathan
    Hey all, First off, I am not an AS 400 guy - at all. So please forgive me for asking any noobish questions here. Basically, I am working on a .NET application that needs to access the AS400 for some real-time data. Although I have the system working, I am getting very different performance results between queries. Typically, when I make the 1st request against a SPROC on the AS400, I am seeing ~14 seconds to get the full data set. After that initial call, any subsequent calls usually only take ~1 second to return. This performance improvement remains for ~20 mins or so before it takes 14 seconds again. The interesting part of this is that, if the stored procedure is executed directly in iSeries Navigator, it always returns within milliseconds (no change in response time). I wonder if it is a caching / execution plan issue, but I can only apply my SQL Server logic to the AS400, which is not always a match. Any suggestions on what I can do to receive a more consistent response time, or simply insight as to why the AS400 is acting in this manner when using the iSeries Data Provider for .NET? Is there a better access method that I should use? Just in case, here's the code I am using to connect to the AS400:
    Dim Conn As New IBM.Data.DB2.iSeries.iDB2Connection(ConnectionString)
    Dim Cmd As New IBM.Data.DB2.iSeries.iDB2Command("SPROC_NAME_HERE", Conn)
    Cmd.CommandType = CommandType.StoredProcedure
    Using Conn
        Conn.Open()
        Dim Reader = Cmd.ExecuteReader()
        Using Reader
            While Reader.Read()
                'Do Something
            End While
            Reader.Close()
        End Using
        Conn.Close()
    End Using

    Read the article

  • asp.net mvc, IIS 6 vs IIS7.5, and integrated windows authentication causing javascript errors?

    - by chris
    This is a very strange one. I have an asp.net MVC 1 app. Under IIS6, with no anon access - only integrated windows auth - every thing works fine. I have the following on most of my Foo pages: <% using (Html.BeginForm()) { %> Show All: <%= Html.CheckBox("showAll", new { onClick = "$(this).parent('form:first').submit();" })%> <% } %> Clicking on the checkbox causes a post, the page is reloaded, everything is good. When I look at the access logs, that's what I see, with one oddity - the js library is requested during the page first request, but not for any subsequent page requests. Log looks like: GET / 401 GET / 200 GET /Content/Site.css 304 GET /Scripts/jquery-1.3.2.min.js 401 GET /Scripts/jquery-ui-1.7.2.custom.min.js 401 GET /Scripts/jquery.tablesorter.min.js 401 GET /Scripts/jquery-1.3.2.min.js 304 GET /Scripts/jquery-ui-1.7.2.custom.min.js 304 GET /Scripts/jquery.tablesorter.min.js 304 GET /Content/Images/logo.jpg 401 GET /Content/Images/logo.jpg 304 GET /Foo 401 GET /Foo 200 POST /Foo/Delete 302 GET /Foo/List 200 POST /Foo/List 200 This corresponds to home page, click on "Foo", delete a record, click a checkbox (which causes the 2nd POST). Under IIS7.5, it sometimes fails - the click on the check box doesn't cause a postback, but there are no obvious reasons why. I've noticed under IIS7.5 that every single page request re-issues the requests for the js libraries - the first one a 401, followed by either a 200 (OK) or 304 (not modified), as opposed to the above log extract where that only happened during the 1st request. Is there any way to eliminate the 401 requests? Could a timing issue have something to do with the click being ignored? Would increasing the number of concurrent connections help? Any other ideas? I'm at a bit of a loss to explain this.

    Read the article

  • CSS positioning inside div

    - by christian
    I am using a div with 2 elements inside and I want to position my 1st element to be vertically aligned top and 2nd element to the bottom of the div. The div is the right portion of my page and equal to the height of my main content. #right { float:right; width: 19%; background:#FF3300; margin-left:2px; padding-bottom: 100%; margin-bottom: -100%; } #right .top { display:block; background-color:#CCCCCC; } #right .bottom { bottom:0px; display:block; background-color:#FFCCFF; height:60px; } HTML: <div id="right"> <span class="top">Top element</span> <span class="bottom"><img src="images/logo_footer1.gif" width="57" height="57" align="left" class="img">&nbsp;<img src="images/logo_footer2.gif" width="57" height="57" align="right" class="img"></span> </div> I want the right div to be like this:
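    A minimal sketch of one way to pin the second span to the bottom, assuming the markup above (the negative-margin/padding faux-column trick on #right may need revisiting, since an absolutely positioned child measures against the padding box):
    #right {
        position: relative;      /* becomes the containing block for .bottom */
        float: right;
        width: 19%;
        background: #FF3300;
    }
    #right .top {
        display: block;          /* stays at the top in normal flow */
        background-color: #CCCCCC;
    }
    #right .bottom {
        position: absolute;      /* taken out of flow and pinned to the bottom edge */
        bottom: 0;
        left: 0;
        right: 0;
        height: 60px;
        background-color: #FFCCFF;
    }
    This only works if #right really is as tall as the main content (for example via a shared wrapper or matching heights); bottom: 0 pins the element to #right's own height, not to the page.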

    Read the article

  • Need help with artificial neural network

    - by deckard cain
    I have input data for a neural network that consists of 2 vectors with 200 elements each, which I got from a program for generating signals. So it is actually a 2x200 input to my nnet. As target data, I have one 1x200 vector that I also got from the same program. That is my training data set. I gather as many of those sets as I want, transfer them to MATLAB, and save them as, for example, set1, set2, .... When I am creating the neural net, using the newfit function (backpropagation algorithm and everything else is set to the defaults, because I am kind of inexperienced with neural nets, so I will have to experiment), I'm creating it using set1 only, for example. Then, when I am to train the neural net, I train it on set1, then load set2 and train on it, and so on. So it's like this:
    function net = create_fit_net(inputs,targets)
    numHiddenNeurons = 20;
    net = newfit(inputs,targets,numHiddenNeurons);
    net = train(net,inputs,targets);
    load set2;
    net = train(net,inputs,targets);
    load set3;
    net = train(net,inputs,targets);
    load set4;
    net = train(net,inputs,targets);
    I am using 4 sets of data here, and all sets have variables of the same name and size. My question is: am I doing this the right way? Because when doing simulation in some other m-file, I am getting bad results, and every time I get different results. Does it matter if I create the network with one set and then train it with the others too, and does it matter which set I use to train the network 1st? Also, I am confused about the number of neurons to use (I'm using 20 in the example, but I actually tried 1, 10, 30, 50, 100, 200 and even 300, and I get nothing). If you have any suggestions, I'd be glad to take them into consideration. Any help is welcome. Thanks in advance.

    Read the article

  • Flex 3.5.0; Update ComboBox display list upon dataprovider change

    - by Gabriel Poama-Neagra
    Hello, I have two related ComboBoxes ( continents, and countries ). When the continents ComboBox changes I request a XML from a certain URL. When I receive that XML i change the DataProvider for the countries ComboBox, like this: public function displayCountryArray( items:XMLList ):void { this.resellersCountryLoader.alpha = 0; this.resellersCountry.dataProvider = items; this.resellersCountry.dispatchEvent( new ListEvent( ListEvent.CHANGE ) ); } I dispatch the ListEvent.CHANGE because I use it to change another ComboBox so please ignore that (and the 1st line ). So, my problem is this: I select "ASIA" from the first continents, then the combobox DATA get's updated ( I can see that because the first ITEM is an item with the label '23 countries' ). I click the combo then I can see the countries. NOW, I select "Africa", the first item is displayed, with the ComboBox being closed, then when I click it, the countries are still the ones from Asia. Anyway, if I click an Item in the list, then the list updates correctly, and also, it has the correct info ( as I said it affects other ComboBoxes ). SO the only problem is that the display list does not get updated. In this function I tried these approaches Converting XMLList to XMLCollection and even ArrayCollection Adding this.resellersCountry.invalidateDisplayList(); Triggering events like DATA_CHANGE and UPDATE_COMPLETE I know they don't make much sense, but I got a little desperate. Please note that when I used 3.0.0 SDK this did not happen. Sorry if I'm stupid, but the flex events are killing me.

    Read the article

  • Slow query with unexpected index scan

    - by zerkms
    Hello, I have this query:
    SELECT *
    FROM sample
    INNER JOIN test ON sample.sample_number = test.sample_number
    INNER JOIN result ON test.test_number = result.test_number
    WHERE sampled_date BETWEEN '2010-03-17 09:00' AND '2010-03-17 12:00'
    The biggest table here is RESULT, which contains 11.1M records; the other 2 tables are about 1M each. This query runs slowly (more than 10 minutes) and returns about 800 records. The execution plan shows a clustered index scan (over its PRIMARY KEY, result.result_number, which actually doesn't take part in the query) over all 11M records. RESULT.TEST_NUMBER is a clustered primary key. If I change 2010-03-17 09:00 to 2010-03-17 10:00, I get about 40 records, it executes in 300ms, and the plan shows an index seek (over the result.test_number index). If I replace * in the SELECT clause with result.test_number (covered by an index), then everything becomes fast in the first case too. This points to HDD IO issues, but doesn't explain the change of plan. So, any ideas?
    UPDATE: sampled_date is in table sample and covered by an index. The other fields from this query: test.sample_number is covered by an index, and result.test_number too.
    UPDATE 2: Obviously SQL Server for some reason doesn't want to use the index. I did a small experiment: I removed the INNER JOIN with result, selected all the test.test_number values, and after that did SELECT * FROM RESULT WHERE TEST_NUMBER IN (...). This, of course, works fast. But I cannot see what the difference is, and why the query optimizer chooses such an inappropriate way to select the data in the 1st case.
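    Two quick experiments worth trying, sketched with the table names from the question (standard T-SQL; FORCESEEK requires SQL Server 2008 or later). A plausible explanation for the flip is that the optimizer over-estimates how many result rows the wider date range will touch and decides that thousands of key lookups for SELECT * would cost more than one clustered index scan, so refreshed statistics or a forced seek make the comparison visible:
    -- 1. rule out stale statistics on the join/filter columns
    UPDATE STATISTICS sample;
    UPDATE STATISTICS test;
    UPDATE STATISTICS result;

    -- 2. force the seek on RESULT and compare the plan and IO with the original query
    SELECT *
    FROM sample
    INNER JOIN test ON sample.sample_number = test.sample_number
    INNER JOIN result WITH (FORCESEEK) ON test.test_number = result.test_number
    WHERE sampled_date BETWEEN '2010-03-17 09:00' AND '2010-03-17 12:00';
    If the forced plan is fast, a covering index on result(test_number) that INCLUDEs the frequently selected columns (or simply selecting fewer columns) is the longer-term fix.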

    Read the article

  • Auth problem on Facebook using Ruby/sinatra/frankie/facebooker

    - by user84584
    Hello guys, I'm using sinatra/frankie/facebooker to prototype something simple to test the facebook api, i'm using mmangino-facebooker the more recent version from github and I cloned the most recent version of frankie. I'm using sinatra 0.9.6. My main code is as simple as possible: before do ensure_application_is_installed_by_facebook_user @user = session[:facebook_session].user @photos = session[:facebook_session].get_photos(nil,@user.uid,nil) end get "/" do erb :index end get "/:uid/:image" do |uid,image| @photo_selected = session[:facebook_session].get_photos([image.to_i],nil,nil) erb :selected end The index page just renders a link to the other one (identified by regex "/:uid/:image") however I always get an error when it's trying to render the one identified by regex "/:uid/:image" Facebooker::Session::MissingOrInvalidParameter: Invalid parameter /Library/Ruby/Gems/1.8/gems/mmangino-facebooker-1.0.50/lib/facebooker/parser.rb:610:in `process' /Library/Ruby/Gems/1.8/gems/mmangino-facebooker-1.0.50/lib/facebooker/parser.rb:30:in `parse' /Library/Ruby/Gems/1.8/gems/mmangino-facebooker-1.0.50/lib/facebooker/service.rb:67:in `post' /Library/Ruby/Gems/1.8/gems/mmangino-facebooker-1.0.50/lib/facebooker/session.rb:600:in `post_without_logging' /Library/Ruby/Gems/1.8/gems/mmangino-facebooker-1.0.50/lib/facebooker/session.rb:611:in `post' /Library/Ruby/Gems/1.8/gems/mmangino-facebooker-1.0.50/lib/facebooker/logging.rb:20:in `log_fb_api' /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/benchmark.rb:308:in `realtime' /Library/Ruby/Gems/1.8/gems/mmangino-facebooker-1.0.50/lib/facebooker/logging.rb:20:in `log_fb_api' /Library/Ruby/Gems/1.8/gems/mmangino-facebooker-1.0.50/lib/facebooker/session.rb:610:in `post' /Library/Ruby/Gems/1.8/gems/mmangino-facebooker-1.0.50/lib/facebooker/session.rb:198:in `secure!' ./config/frankie/lib/frankie.rb:66:in `secure_with_token!' ./config/frankie/lib/frankie.rb:44:in `set_facebook_session' ./config/frankie/lib/frankie.rb:164:in `ensure_authenticated_to_facebook' ./config/frankie/lib/frankie.rb:169:in `ensure_application_is_installed_by_facebook_user' I've no idea why, it seems to be related with the auth token I guess.. I logged the request made o the fb rest server: {:sig="4f244d1f510498f4efaae3c03d036a85", :generate_session_secret="0", :method="facebook.auth.getSession", :auth_token="9dae0d02c19c680b574c78d202b0582a", :api_key="70c14732815ace0ae71a561ea5eb38b7", :v="1.0"} {:call_id="1269003766.05665", :sig="194469457d1424dc8ba0678979692363", :method="facebook.photos.get", :subj_id=750401957, :session_key="2.lXL0z3s4_r573xzQwAiA9A__.3600.1269010800-750401957", :api_key="70c14732815ace0ae71a561ea5eb38b7", :v="1.0"} {:sig="4f244d1f510498f4efaae3c03d036a85", :generate_session_secret="0", :method="facebook.auth.getSession", :auth_token="9dae0d02c19c680b574c78d202b0582a", :api_key="70c14732815ace0ae71a561ea5eb38b7", :v="1.0"} The last one gives the error, it could be related with auth_token having the same value in the 1st and on the 3rd ? Cheers and tks, Ze Maria

    Read the article

  • model not showing up in django admin.

    - by Zayatzz
    Hi. I have created several django apps and stuff for my own fun, and so far everything has been working fine. Now I have just created a new project (django 1.2.1) and have run into trouble from the 1st moments. I created a new app - game - and a new model Game. I created admin.py and put the related stuff into it. Ran syncdb and went to check the admin. The model did not show up there. I proceeded to check and double-check and read through previous similar threads:
    http://stackoverflow.com/questions/1839927/registered-models-do-not-show-up-in-admin
    http://stackoverflow.com/questions/1694259/django-app-not-showing-up-in-admin-interface
    But as far as I can tell, they don't help me either. Perhaps someone else can point this out for me.
    models.py in the game app:
    # -*- coding: utf-8 -*-
    from django.db import models

    class Game(models.Model):
        type = models.IntegerField(blank=False, null=False, default=1)
        teamone = models.CharField(max_length=100, blank=False, null=False)
        teamtwo = models.CharField(max_length=100, blank=False, null=False)
        gametime = models.DateTimeField(blank=False, null=False)
    admin.py in the game app:
    # -*- coding: utf-8 -*-
    from jalka.game.models import Game
    from django.contrib import admin

    class GameAdmin(admin.ModelAdmin):
        list_display = ['type', 'teamone', 'teamtwo', 'gametime']

    admin.site.register(Game, GameAdmin)
    project settings.py:
    MIDDLEWARE_CLASSES = (
        'django.middleware.common.CommonMiddleware',
        'django.contrib.sessions.middleware.SessionMiddleware',
        'django.middleware.csrf.CsrfViewMiddleware',
        'django.contrib.auth.middleware.AuthenticationMiddleware',
        'django.contrib.messages.middleware.MessageMiddleware',
    )
    ROOT_URLCONF = 'jalka.urls'
    TEMPLATE_DIRS = (
        "/home/projects/jalka/templates/"
    )
    INSTALLED_APPS = (
        'django.contrib.auth',
        'django.contrib.contenttypes',
        'django.contrib.sessions',
        'django.contrib.sites',
        'django.contrib.messages',
        'django.contrib.admin',
        'game',
    )
    urls.py:
    from django.conf.urls.defaults import *

    # Uncomment the next two lines to enable the admin:
    from django.contrib import admin
    admin.autodiscover()

    urlpatterns = patterns('',
        # Example:
        # (r'^jalka/', include('jalka.foo.urls')),
        (r'^admin/', include(admin.site.urls)),
    )
    Alan.
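    One thing that stands out, offered as an observation rather than a confirmed diagnosis: admin.py imports the model as jalka.game.models.Game, while INSTALLED_APPS registers the app as plain 'game'. With that mismatch, admin.autodiscover() may never import the admin module for the app as it is registered, so nothing gets added to the admin site. A minimal sketch of making the two consistent - pick one of the options, not both:
    # Option 1: settings.py - register the app under the same dotted path the imports use
    INSTALLED_APPS = (
        # ... django.contrib apps as before ...
        'jalka.game',          # was 'game'
    )

    # Option 2: game/admin.py - keep 'game' in INSTALLED_APPS and drop the project prefix
    from game.models import Game
    from django.contrib import admin

    class GameAdmin(admin.ModelAdmin):
        list_display = ['type', 'teamone', 'teamtwo', 'gametime']

    admin.site.register(Game, GameAdmin)
    Restarting the dev server after the change matters too, since admin registration happens at import time.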

    Read the article

  • how to use a regex to search backwards effectively?

    - by Asaf
    hi, i'm searching forward in an array of strings with a regex, like this: for (int j = line; j < lines.length; j++) { if (lines[j] == null || lines[j].isEmpty()) { continue; } matcher = pattern.matcher(lines[j]); if (matcher.find(offset)) { offset = matcher.end(); line = j; System.out.println("found \""+matcher.group()+"\" at line "+line+" ["+matcher.start()+","+offset+"]"); return true; } offset = 0; } return false; note that in my implementation above i save the line and offset for continuous searches. anyway, now i want to search backwards from that [line,offset]. my question: is there a way to search backwards with a regex efficiently? if not, what could be an alternative? 10x, asaf :-) clarification: by backwards i mean finding the previous match. for example, say that i'm searching for "dana" in "dana nama? dana kama! lama dana kama?" and got to the 2nd match. if i do matcher.find() again, i'll search forward and get the 3rd match. but i want to seach backwards and get to the 1st match. the code above should then output something like: found "dana" at line 0 [0,3] // fwd found "dana" at line 0 [11,14] // fwd found "dana" at line 0 [0,3] // bwd
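    java.util.regex has no built-in backwards find, so the usual workaround is to rescan the region before the current position and keep the last match found. A rough sketch against the same fields used above (lines, pattern, line, offset); the bookkeeping details are illustrative:
    // find the previous match: walk lines backwards; on the current line only
    // consider matches that end before the current offset
    private boolean findPrevious() {
        for (int j = line; j >= 0; j--) {
            if (lines[j] == null || lines[j].isEmpty()) {
                continue;
            }
            int limit = (j == line) ? offset : lines[j].length() + 1;
            Matcher m = pattern.matcher(lines[j]);
            int lastStart = -1, lastEnd = -1;
            String lastGroup = null;
            while (m.find() && m.end() < limit) {   // forward scan, remember the last hit
                lastStart = m.start();
                lastEnd = m.end();
                lastGroup = m.group();
            }
            if (lastStart >= 0) {
                line = j;
                offset = lastEnd;
                System.out.println("found \"" + lastGroup + "\" at line " + line
                        + " [" + lastStart + "," + lastEnd + "]");
                return true;
            }
        }
        return false;
    }
    This costs one forward scan of the line per backwards step; if that matters, an alternative is to collect all match positions in a single pass per line and step through that list in either direction.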

    Read the article

  • MSMQ first Message.Body in queue is OK, all following Message.Body in queue are empty

    - by Andrew A
    I send a handful of identical (except for Id#, obviously) messages to an MSMQ queue on my local machine. The body of the messages is a serialized XElement object. When I try to process the first message in the queue, I am able to successfully de-serialize the Message.Body object and save it to file. However, when trying to process the next (or any subsequent) message, the Message.Body is absent, and an exception is thrown. I have verified the Message ID's are correct for the message attempting to be processed. The XML being serialized is properly formed. Any ideas? I am basing my code on the Microsoft MSMQ Book order sample found here: http://msdn.microsoft.com/en-us/library/ms180970%28VS.80%29.aspx // Create Envelope XML object XElement envelope = new XElement(env + "Envelope", new XAttribute(XNamespace.Xmlns + "env", env.NamespaceName) <snip> //Send envelope as message body MessageQueue myQueue = new MessageQueue(String.Format(@"FORMATNAME:DIRECT=OS:localhost\private$\mqsample")); myQueue.DefaultPropertiesToSend.Recoverable = true; // Prepare message Message myMessage = new Message(); myMessage.ResponseQueue = new MessageQueue(String.Format(System.Globalization.CultureInfo.InvariantCulture, @"FORMATNAME:DIRECT=TCP:192.168.1.217\private$\mqdemoAck")); myMessage.Body = envelope; // Send the message into the queue. myQueue.Send(myMessage,"message label"); //Retrieve messages from queue LabelIdMapping labelID = (LabelIdMapping)mqlistBox3.SelectedItem; System.Messaging.Message message = mqOrderQueue.ReceiveById(labelID.Id); The Message.Body value I see on the 1st retrieve is as expected: <?xml version="1.0" encoding="utf-8"?> <string>Some String</string> However, the 2nd and subsequent retrieve operations Message.Body is: "Cannot deserialize the message passed as an argument. Cannot recognize the serialization format." How does this work fine the first time but not after that? I have tried message.Dispose() after retrieving it but it did not help. Thank you very much for any help on this!
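    One thing worth ruling out, sketched below rather than asserted as the fix: the XML body is read through the queue's formatter, and if the XmlMessageFormatter with the right target types is only attached to the first message (or set after the first Receive), later ReceiveById calls can fail with exactly "Cannot recognize the serialization format". Setting it once on the receiving queue before any receive keeps every message deserializable; the queue path is illustrative, and typeof(string) matches the <string> body shown above (use typeof(XElement) instead if the raw XElement is the body):
    // receiving side - configure the formatter once, before ReceiveById
    MessageQueue mqOrderQueue = new MessageQueue(@".\private$\mqsample");
    mqOrderQueue.Formatter = new XmlMessageFormatter(new Type[] { typeof(string) });

    LabelIdMapping labelID = (LabelIdMapping)mqlistBox3.SelectedItem;
    System.Messaging.Message message = mqOrderQueue.ReceiveById(labelID.Id);
    object body = message.Body;   // deserialized with the formatter configured above
    If that doesn't change anything, comparing message.BodyStream of a "good" and a "bad" message (for example by copying it to a file) would show whether the sender really wrote identical XML for both.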

    Read the article

  • Moq for Silverlight doesn't raise event

    - by Budda
    Trying to write a unit test for Silverlight 4.0 using Moq 4.0.10531.7:
    public delegate void DataReceived(ObservableCollection<TeamPlayerData> AllReadyPlayers, GetSquadDataCompletedEventArgs squadDetails);

    public interface ISquadModel : IModelBase
    {
        void RequestData(int matchId, int teamId);
        void SaveData();
        event DataReceived DataReceivedEvent;
    }

    void MyTest()
    {
        Mock<ISquadModel> mockSquadModel = new Mock<ISquadModel>();
        mockSquadModel.Raise(model => model.DataReceivedEvent += null, EventArgs.Empty);
    }
    Instead of raising the 'DataReceivedEvent', the following error is received:
    Object of type 'Castle.Proxies.ISquadModelProxy' cannot be converted to type 'System.Collections.ObjectModel.ObservableCollection`1[TeamPlayerData]'.
    Why is an attempt made to convert the mock to the type of the 1st event parameter? How can I raise the event? I've also tried another approach:
    mockSquadModel
        .Setup(model => model.RequestData(TestMatchId, TestTeamId))
        .Raises(model => model.DataReceivedEvent += null, EventArgs.Empty);
    This should raise the event in case somebody calls the method set up above... Instead, the same error is generated. Any thoughts are welcome. Thanks
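    A sketch of what usually resolves this: DataReceived is a custom delegate, not an EventHandler, so Moq cannot map (sender, EventArgs.Empty) onto its two parameters and ends up trying to pass the mock itself as the first argument. Raise and Raises both have params object[] overloads that take arguments matching the delegate signature (the argument values below are illustrative):
    var mockSquadModel = new Mock<ISquadModel>();

    var players = new ObservableCollection<TeamPlayerData>();
    GetSquadDataCompletedEventArgs squadDetails = null;   // or a constructed/fake instance if available

    // raise directly with arguments matching DataReceived(ObservableCollection<TeamPlayerData>, GetSquadDataCompletedEventArgs)
    mockSquadModel.Raise(model => model.DataReceivedEvent += null, players, squadDetails);

    // or raise whenever RequestData is called
    mockSquadModel
        .Setup(model => model.RequestData(TestMatchId, TestTeamId))
        .Raises(model => model.DataReceivedEvent += null, players, squadDetails);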

    Read the article
