Search Results

Search found 3262 results on 131 pages for '410 gone'.

Page 15/131 | < Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >

  • Redirect to prevent dup submission...but then you lose existing data

    - by coffeeaddict
    Here's the scenario: the user is on your checkout.aspx page. At some point, when they click the pay button, you redirect them to an intermediate page (before the confirmation page) to do some other logic. That intermediate page performs whatever logic is indicated by a querystring flag you sent along with the redirect from the checkout page, and it also serves as an error page: if any logic in intermediatePage.aspx.cs fails, I set a message on that page to be displayed to the user. If the user refreshes, the querystring value is still in the URL, so when it hits my Page_Load again the server-side logic is called and run a second time, which I don't want. To avoid this, the logical next step is to redirect back to the same page on a refresh (not sure how you'd even catch that) to get rid of the querystring. But when you redirect back to the same page, the error message is lost in the redirect, so you end up showing the same page with all the error-message values now gone. I do not want to solve this with JavaScript either. I am not sure of the best way to handle this.
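    A common way out of this is the Post/Redirect/Get pattern with a one-time "flash" message: run the pay logic only on the POST, stash any error server-side under a throwaway key, redirect, and have the target page consume and delete the message so a refresh re-runs nothing. The sketch below shows the idea in plain Node/TypeScript rather than the asker's ASP.NET WebForms stack (where Session, or TempData in MVC, usually plays the same role); the paths and the runPaymentLogic helper are made up for illustration.

      import * as http from "http";
      import * as crypto from "crypto";

      // One-time error messages, keyed by a random id that travels in the redirect URL.
      const FLASH = new Map<string, string>();

      // Stand-in for the intermediate page's server-side logic.
      function runPaymentLogic(): string | null {
        return Math.random() < 0.5 ? "Payment gateway declined the card" : null;
      }

      const server = http.createServer((req, res) => {
        const url = new URL(req.url ?? "/", "http://localhost");

        if (req.method === "POST" && url.pathname === "/pay") {
          // The sensitive logic runs exactly once, only on the POST.
          const error = runPaymentLogic();
          if (error) {
            const id = crypto.randomUUID();
            FLASH.set(id, error);
            // Redirect (303) so a refresh of the next page is a harmless GET.
            res.writeHead(303, { Location: `/checkout?msg=${id}` });
          } else {
            res.writeHead(303, { Location: "/confirmation" });
          }
          res.end();
          return;
        }

        if (url.pathname === "/checkout") {
          // Read the flash message once, then delete it: refreshing the page
          // neither re-runs the pay logic nor shows a stale error.
          const id = url.searchParams.get("msg");
          const message = (id && FLASH.get(id)) || "";
          if (id) FLASH.delete(id);
          res.writeHead(200, { "Content-Type": "text/html" });
          res.end(`<p>${message}</p><form method="POST" action="/pay">...</form>`);
          return;
        }

        res.writeHead(404);
        res.end();
      });

      server.listen(8080);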

    Read the article

  • Xcode App Crash When Connecting to the OData Services [on hold]

    - by user3685677
    Can someone help me resolve the following issue? When connecting from an iPad app to an SAP ECC system through OData channel services via SUP, it lets me log in the first time and I can retrieve data from the SAP system successfully. But when I log out and try logging in again within the same session, the application crashes. The crash report is below for your reference. I am using the SDM parser to connect to the SAP system:

      SDMODataServiceDocumentParser *sdmDocParser = [[SDMODataServiceDocumentParser alloc] init];
      [sdmDocParser parse:aServiceDocument];
      m_serviceDocument = sdmDocParser.serviceDocument;

      // Load the object with the metadata xml:
      SDMODataMetaDocumentParser *sdmMetadataParser = [[SDMODataMetaDocumentParser alloc] initWithServiceDocument:m_serviceDocument];
      [sdmMetadataParser parse:aMetadata];

    After initializing the service, I set the URL:

      [service setServiceDocumentUrl:m_serviceDocumentURL];

    I use SDMConnectivityHelper to execute the request for that URL:

      id<SDMRequesting> serviceDocumentRequest2 = [connectivityHelper executeBasicSyncRequestWithQuery3:[[ODataQuery alloc] initWithURL:[NSURL URLWithString:encodedStrUrl]]];

      - (id <SDMRequesting>)executeBasicSyncRequestWithQuery3:(ODataQuery *)aQuery {
          id<SDMRequesting> request = [self createRequestWithQuery:aQuery];
          [request setTimeOutSeconds:TIMEOUT_SEC];
          [request setRequestMethod:@"GET"];
          [request addRequestHeader:@"Content-Type" value:@"application/xml"];
          [request startSynchronous];   // <-- the app crashes on this line
          return request;
      }

      - (id <SDMRequesting>)createRequestWithQuery:(ODataQuery *)aQuery {
          if (isSUPMode) {
              [SDMRequestBuilder setRequestType:SUPRequestType];
          } else {
              [SDMRequestBuilder setRequestType:HTTPRequestType];
          }
          id <SDMRequesting> request = [SDMRequestBuilder requestWithURL:aQuery.URL];
          request.username = self.username;
          request.password = self.password;
          return request;
      }

    Crash report:

      Incident Identifier: 347511BA-5F7F-45D4-8662-D5DCD2F88EA7
      CrashReporter Key:   9a4d38cf19b1a94476eb6b2170d4f56678d6ca60
      Hardware Model:      iPad3,4
      Path:                /var/mobile/Applications/F38AD64F-03F8-4A21-
      Exception Type:      EXC_BAD_ACCESS (SIGSEGV)
      Exception Subtype:   KERN_INVALID_ADDRESS at 0x00000000
      Triggered by Thread: 0

      Thread 0 Crashed:
      0   libsystem_platform.dylib  0x393a94c0 _platform_memmove$VARIANT$Swift + 160
      1   Eby Sales Order           0x0015a2c8 0xb7000 + 668360
      2   Eby Sales Order           0x0015a8b8 0xb7000 + 669880
      3   Eby Sales Order           0x003331ee 0xb7000 + 2605550
      4   Eby Sales Order           0x0031856e 0xb7000 + 2495854
      5   Eby Sales Order           0x00338454 0xb7000 + 2626644
      6   Eby Sales Order           0x000e6ad8 0xb7000 + 195288
      7   Eby Sales Order           0x000e99a0 0xb7000 + 207264
      8   Eby Sales Order           0x000ea442 0xb7000 + 209986
      9   Eby Sales Order           0x000eb0d6 0xb7000 + 213206
      10  Eby Sales Order           0x000c13d0 0xb7000 + 41936
      11  Foundation                0x2ec93112 __NSFireDelayedPerform + 410
      12  CoreFoundation            0x2e27ef4c __CFRUNLOOP_IS_CALLING_OUT_TO_A_TIMER_CALLBACK_FUNCTION__ + 12
      13  CoreFoundation            0x2e27eb66 __CFRunLoopDoTimer + 790
      14  CoreFoundation            0x2e27ceee __CFRunLoopRun + 1214
      15  CoreFoundation            0x2e1e7764 CFRunLoopRunSpecific + 520
      16  CoreFoundation            0x2e1e7546 CFRunLoopRunInMode + 102
      17  GraphicsServices          0x331216ce GSEventRunModal + 134
      18  UIKit                     0x30b4688c UIApplicationMain + 1132
      19  Eby Sales Order           0x000bd8da 0xb7000 + 26842
      20  Eby Sales Order           0x000bd89c 0xb7000 + 26780

    Read the article

  • Android TranslateAnimation resets after animation

    - by monmonja
    I'm creating something like a SlidingDrawer but with more customization. Basically the thing is working, but the animation flickers at the end. To explain further: I have a TranslateAnimation, and after the animation the view returns to its original position. If I set setFillAfter, the buttons inside the layout stop working. If I listen for onAnimationEnd and set the other layout to View.GONE, the layout flickers. My guess is that when the animation ends, the view snaps back to its original position before View.GONE is applied. Any advice would be awesome. Thanks.

    Read the article

  • [PHP] How can I connect to MySQL on WAMP server?

    - by user294359
    This might be ridiculously easy for you, but I've been struggling with this for an hour. :(

      <?php
      $connect = mysql_connect("localhost:8080", "root", "mypassword");
      echo($connect);
      ?>

    This is the code that I'm trying to run - you can see that I'm using 8080 as my port number and, of course, I have HTML code as well. However, it gives me the following error messages whenever I try to open the PHP file:

      Warning: mysql_connect() [function.mysql-connect]: MySQL server has gone away in C:\wamp\www\php_sandbox\index.php on line 2
      Warning: mysql_connect() [function.mysql-connect]: Error while reading greeting packet. PID=4932 in C:\wamp\www\php_sandbox\index.php on line 2
      Warning: mysql_connect() [function.mysql-connect]: MySQL server has gone away in C:\wamp\www\php_sandbox\index.php on line 2

    I don't know what's wrong with this. Is it because of the port number?
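    Most likely, yes: under WAMP, 8080 is usually the port Apache serves pages on, while MySQL itself listens on 3306 by default, and "Error while reading greeting packet" is typical of connecting to something that doesn't speak the MySQL protocol. A minimal sketch of the same connection in Node/TypeScript using the mysql2 package (the credentials are the placeholders from the question; the key point is the port):

      import mysql from "mysql2/promise";

      async function main(): Promise<void> {
        // Connect to MySQL's own port (3306 by default), not the web server's port.
        const connection = await mysql.createConnection({
          host: "localhost",
          port: 3306,
          user: "root",
          password: "mypassword",
        });

        const [rows] = await connection.query("SELECT VERSION() AS version");
        console.log(rows);

        await connection.end();
      }

      main().catch(console.error);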

    Read the article

  • Low load average with plenty of cpu-intensive processes

    - by sds
    I see a load average of about 1 with at least 3 processes running at full tilt. How can that be?

      top - 11:48:32 up 147 days, 5:38, 8 users, load average: 1.08, 1.11, 1.05
      Tasks: 416 total, 4 running, 410 sleeping, 2 stopped, 0 zombie
      Cpu0  : 43.3%us, 13.7%sy, 0.0%ni, 43.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu1  : 48.8%us, 12.4%sy, 0.0%ni, 38.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu2  :  0.7%us,  0.7%sy, 0.0%ni, 98.3%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st
      Cpu3  : 99.3%us,  0.7%sy, 0.0%ni,  0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu4  :  0.0%us,  0.3%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu5  :  5.7%us,  0.7%sy, 0.0%ni, 93.6%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu6  :  2.3%us,  0.3%sy, 0.0%ni, 97.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu7  :  0.3%us,  0.3%sy, 0.0%ni, 99.0%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st
      Cpu8  : 38.4%us, 17.4%sy, 0.0%ni, 44.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu9  : 43.4%us, 13.5%sy, 0.0%ni, 43.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu10 :  0.0%us,  0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu11 :  0.0%us,  0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu12 :  0.0%us,  0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu13 :  0.3%us,  0.3%sy, 0.0%ni, 99.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu14 :  0.0%us,  0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu15 :  1.0%us,  0.7%sy, 0.0%ni, 98.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
      Mem:  132145404k total, 88125080k used, 44020324k free,   516476k buffers
      Swap:   8388600k total,   620232k used,  7768368k free, 55729064k cached

        PID USER      PR NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
      25424 jonathan  20  0 4404m 4.1g 3268 R 99.7  3.3 212:58.17 python2.7
      20939 sam       20  0  908m 733m 3376 R 81.2  0.6 603:08.07 python2.7
      20987 sam       20  0  908m 732m 3376 R 79.8  0.6 598:49.18 python2.7
      25428 jonathan  20  0  774m 164m  15m S 14.2  0.1  24:22.60 java
      20996 sam       20  0 98.4m 7780 1880 S  4.3  0.0  17:48.15 vw
      20941 sam       20  0  161m  70m 1880 S  3.0  0.1  18:10.03 vw
      20940 sam       20  0 98.4m 8068 1880 S  2.6  0.0  18:06.28 vw
      20942 sam       20  0 98.4m 8080 1880 S  2.6  0.0  17:39.45 vw
      20944 sam       20  0  161m  71m 1880 S  2.6  0.1  17:29.29 vw
      20947 sam       20  0  161m  71m 1880 S  2.6  0.1  17:25.58 vw
      20959 sam       20  0  161m  70m 1880 S  2.6  0.1  17:28.00 vw
      20962 sam       20  0  161m  70m 1880 S  2.6  0.1  17:26.96 vw
      20963 sam       20  0 98.4m 8076 1880 S  2.6  0.0  18:07.19 vw
      20965 sam       20  0  161m  71m 1880 S  2.6  0.1  18:08.13 vw
      20995 sam       20  0  161m  71m 1880 S  2.6  0.1  17:38.67 vw
       6399 root      20  0  558m  19m 5028 S  2.3  0.0 4329:56   BESClient
      20945 sam       20  0 98.4m 8068 1880 S  2.3  0.0  17:35.38 vw
      20948 sam       20  0 98.4m 8068 1880 S  2.3  0.0  17:26.01 vw
      20950 sam       20  0  161m  70m 1880 S  2.3  0.1  17:25.79 vw
      20952 sam       20  0 98.4m 8076 1880 S  2.3  0.0  17:32.94 vw
      20955 sam       20  0  161m  70m 1880 S  2.3  0.1  17:26.61 vw
      20956 sam       20  0 98.4m 8072 1880 S  2.3  0.0  17:34.76 vw
      20960 sam       20  0 98.4m 8072 1880 S  2.3  0.0  17:34.04 vw

    Adding up the CPU percentages gives about 300%, and the top process list also adds up to about 300%. Why is the load average about 1?
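    One thing worth keeping in mind when reading numbers like these: the load average is not a CPU percentage at all, but an exponentially damped average of the number of tasks that are runnable (state R) or in uninterruptible sleep (state D), reported over 1, 5 and 15 minutes. A small Node/TypeScript sketch (Linux only; the parsing of /proc/<pid>/stat is the only assumption here) that prints both side by side, so the reported load can be compared with the instantaneous runnable count:

      import * as fs from "fs";
      import * as os from "os";

      // Count tasks currently in state R (runnable) or D (uninterruptible sleep),
      // which is what the kernel's load-average calculation samples.
      function runnableOrUninterruptible(): number {
        let count = 0;
        for (const pid of fs.readdirSync("/proc").filter((name) => /^\d+$/.test(name))) {
          try {
            // /proc/<pid>/stat looks like: "<pid> (<comm>) <state> ...";
            // the state letter is the first field after the closing parenthesis.
            const stat = fs.readFileSync(`/proc/${pid}/stat`, "utf8");
            const state = stat.slice(stat.lastIndexOf(")") + 2, stat.lastIndexOf(")") + 3);
            if (state === "R" || state === "D") count++;
          } catch {
            // The process exited between readdir and read; ignore it.
          }
        }
        return count;
      }

      setInterval(() => {
        const [oneMinute] = os.loadavg();
        console.log(
          `1-min load average ${oneMinute.toFixed(2)} | R+D tasks right now: ${runnableOrUninterruptible()}`
        );
      }, 5000);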

    Read the article

  • Chrome does not re-draw properly on Windows 8

    - by Akshat Mittal
    There are a lot of problems with Chrome (24.0.1312.14 beta, though all of this happened before the update as well) on Windows 8. The problems and explanations are listed below.

    Google Chrome re-draw time: when I switch tabs, the window retains the content of the previous tab and displays that even if I move my mouse; it only refreshes (re-draws) when there is a change on the webpage (like on hover) or when I do a select-all (or scroll). One thing to note is that the hover and select happen on the real page and not on the retained image-like thing of the older webpage.

    Chrome is slow and laggy: websites such as Facebook and Twitter (and more) have gone extremely laggy in Chrome (Win 8). When I was using Windows 7, I never experienced any lag. Also, when using HTML5 websites, transitions (the -webkit-transition in CSS) go extremely slowly at times.

    Plugins crash: plugins like Flash Player, Shockwave Player, and more that are built into Chrome crash a lot, even when doing simple tasks like playing YouTube videos or displaying ads.

    Chrome crashes: Chrome has crashed over 100 times in the past month. Google Chrome just crashes randomly, or I don't know the reason.

    Random page crashes: Chrome shows chrome://crash/ (copy-paste this into the address bar) on random pages, even when the page has just loaded. I understand that this can happen on heavy HTML5 or JS websites, but what about HTML-only websites?

    Computer freeze: Chrome sometimes randomly freezes my computer. Freeze in the sense that none of the other apps work either; it's like the whole system freezes and I cannot even switch to other apps. I am sure this is because of Chrome, since it happens only when Chrome is active.

    Most of the above also happens on Super User; Super User never had any problem when I was using Chrome on Windows 7.

    UPDATE 1: @magicandre1981 suggested disabling hardware acceleration. I tried it; it somewhat helped but didn't fix the problem. I am still experiencing all the above issues, just less frequently (maybe because Chrome restarted completely).

    UPDATE 2: @avirk asked me to try a stable version of Chrome and Firefox. I didn't experience any lag in Firefox, and only a little (negligible) lag in Chrome 22 (maybe because it's a fresh copy of Chrome; I haven't used it much).

    UPDATE 3: @NothingsImpossible said that he is experiencing the same problem on Windows Server 2008! This seems to be a major issue now. He also said that GPU load is high at the same time; I saw the same thing.

    UPDATE 4: Recently Chrome updated to v24 stable (I have been using stable for a long time now). I experienced this problem a lot less in Chrome 23, but it is back in Chrome 24. It seems Chrome 24 is the most affected by this bug, as the same problem was also frequent in the Chrome 24 beta.

    UPDATE 5: Chrome was updated to v25 stable. The problem is 99% gone, but it is still there in 1% of cases. One such example: when I leave Chrome inactive for a while with a few tabs open, the tabs go black and no activity can get them back to an active state. If I open a new tab, the new tab is OK, but the others stay black and I need to close all those tabs.

    UPDATE 6: Chrome updated to the v27 stable channel; this problem is nearly gone. It does happen occasionally, but not as frequently as in earlier versions of Chrome.

    UPDATE 7: I am on Chrome v35.0.1916.114 stable, Windows 8.1 Pro Update 1. Some of the other problems appear to be back: Chrome is slow and laggy again, and re-draw time is getting worse.
Is anybody else experiencing such issues? Does anybody have a solution to any of these?

    Read the article

  • Load average has been high over some period

    - by user111196
    We have a dedicated MySQL server and below is a snapshot of top. The load average has been staying at nearly 100 for over an hour already.

      top - 20:54:28 up 7:31, 2 users, load average: 83.08, 96.88, 106.23
      Tasks: 278 total, 2 running, 274 sleeping, 2 stopped, 0 zombie
      Cpu0  : 18.8%us, 10.2%sy, 0.0%ni, 70.9%id, 0.0%wa, 0.0%hi,  0.0%si, 0.0%st
      Cpu1  : 51.2%us,  4.3%sy, 0.0%ni, 44.2%id, 0.0%wa, 0.0%hi,  0.3%si, 0.0%st
      Cpu2  :  9.0%us, 10.3%sy, 0.0%ni, 80.6%id, 0.0%wa, 0.0%hi,  0.0%si, 0.0%st
      Cpu3  : 18.8%us,  7.4%sy, 0.0%ni, 73.8%id, 0.0%wa, 0.0%hi,  0.0%si, 0.0%st
      Cpu4  :  7.8%us,  8.8%sy, 0.0%ni, 83.4%id, 0.0%wa, 0.0%hi,  0.0%si, 0.0%st
      Cpu5  : 10.3%us,  8.4%sy, 0.0%ni, 81.4%id, 0.0%wa, 0.0%hi,  0.0%si, 0.0%st
      Cpu6  :  6.2%us,  7.5%sy, 0.0%ni, 86.2%id, 0.0%wa, 0.0%hi,  0.0%si, 0.0%st
      Cpu7  :  6.2%us,  6.2%sy, 0.0%ni, 87.3%id, 0.0%wa, 0.0%hi,  0.3%si, 0.0%st
      Cpu8  :  8.8%us, 10.4%sy, 0.0%ni, 80.5%id, 0.0%wa, 0.0%hi,  0.3%si, 0.0%st
      Cpu9  : 63.7%us,  4.6%sy, 0.0%ni, 12.2%id, 0.0%wa, 4.3%hi, 15.2%si, 0.0%st
      Cpu10 :  9.2%us, 10.2%sy, 0.0%ni, 80.6%id, 0.0%wa, 0.0%hi,  0.0%si, 0.0%st
      Cpu11 : 17.3%us,  5.9%sy, 0.0%ni, 76.8%id, 0.0%wa, 0.0%hi,  0.0%si, 0.0%st
      Cpu12 :  8.0%us,  8.7%sy, 0.0%ni, 83.3%id, 0.0%wa, 0.0%hi,  0.0%si, 0.0%st
      Cpu13 : 10.9%us,  7.4%sy, 0.0%ni, 81.7%id, 0.0%wa, 0.0%hi,  0.0%si, 0.0%st
      Cpu14 :  6.2%us,  6.9%sy, 0.0%ni, 86.9%id, 0.0%wa, 0.0%hi,  0.0%si, 0.0%st
      Cpu15 :  4.8%us,  6.1%sy, 0.0%ni, 89.0%id, 0.0%wa, 0.0%hi,  0.0%si, 0.0%st
      Mem:  33009800k total, 23174396k used,  9835404k free,   120604k buffers
      Swap: 35061752k total,        0k used, 35061752k free, 16459540k cached

        PID USER   PR NI  VIRT  RES  SHR S  %CPU %MEM    TIME+  COMMAND
       3341 mysql  20  0 14.3g 4.6g 4240 S 417.8 14.5  1673:51  mysqld
      24406 root   20  0 15008 1292  876 R   0.3  0.0   0:00.19 top
          1 root   20  0  4080  852  608 S   0.0  0.0   0:01.92 init
          2 root   15 -5     0    0    0 S   0.0  0.0   0:00.00 kthreadd
          3 root   RT -5     0    0    0 S   0.0  0.0   0:00.32 migration/0
          4 root   15 -5     0    0    0 S   0.0  0.0   0:00.29 ksoftirqd/0
          5 root   RT -5     0    0    0 S   0.0  0.0   0:00.00 watchdog/0
          6 root   RT -5     0    0    0 S   0.0  0.0   0:03.21 migration/1
          7 root   15 -5     0    0    0 S   0.0  0.0   0:00.07 ksoftirqd/1
          8 root   RT -5     0    0    0 S   0.0  0.0   0:00.00 watchdog/1
          9 root   RT -5     0    0    0 S   0.0  0.0   0:00.17 migration/2
         10 root   15 -5     0    0    0 S   0.0  0.0   0:00.03 ksoftirqd/2
         11 root   RT -5     0    0    0 S   0.0  0.0   0:00.00 watchdog/2
         12 root   RT -5     0    0    0 S   0.0  0.0   0:00.32 migration/3
         13 root   15 -5     0    0    0 S   0.0  0.0   0:00.02 ksoftirqd/3
         14 root   RT -5     0    0    0 S   0.0  0.0   0:00.00 watchdog/3
         15 root   RT -5     0    0    0 S   0.0  0.0   0:00.10 migration/4
         16 root   15 -5     0    0    0 S   0.0  0.0   0:00.04 ksoftirqd/4
         17 root   RT -5     0    0    0 S   0.0  0.0   0:00.00 watchdog/4
         18 root   RT -5     0    0    0 S   0.0  0.0   0:00.35 migration/5

    We have also tried running the command below. What other commands can help us diagnose the exact cause of this high load?

      netstat -nat | grep 3306 | awk '{print $6}' | sort | uniq -c | sort -n
            1 LISTEN
            1 SYN_RECV
          410 ESTABLISHED
          964 TIME_WAIT

    Output of vmstat 1:

      ---------------memory--------------- --swap-- --io-- --system-- -----cpu------
       r  b   swpd     free   buff    cache   si   so   bi   bo   in   cs us sy id wa st
       2  0      0 12978936  30944 15172360    0    0  259    3  184  265  6  6 77 12  0

    Read the article

  • Get Professional SEO Service From SEO Experts

    SEO service is imperative to online business. Business today has gone digital. With its power to perform international promotion, almost every brand is on a bid to establish an online presence. However, it's not easy to gain online presence without proper SEO service.

    Read the article

  • Did the Community Lose Its Focus, or Did I?

    - by Jonathan Kehayias
    Late Thursday night, ok it was actually very early Friday morning, I wrote a blog post that stirred a bit of a controversy in the community.  While the outcome of the discussion that was sparked by that post in the community has been good, it is definitely a case where the end isn’t justified by the means.   Hindsight is always 20/20, and while I stand by the point I was trying to make with that post, there are a number of ways I could have gone about making that point without risking...(read more)

    Read the article

  • Book: Confessions of a Public Speaker: Scott Berkun

    - by Greg Low
    It's probably apparent that I've been travelling again a lot lately as the number of posts related to books has gone up. One book that I picked up along the way and really enjoyed was Scott Berkun's Confessions of a Public Speaker . I could relate to so much of what Scott was talking about and there are quite a few solid nuggets of advice in the book. It's very important when you are regularly giving technical presentations to spend time learning about the "presenting" part of the task, not just...(read more)

    Read the article

  • Good Scoop: The PeopleSoft/IBM Backstory

    - by Brian Dayton
    Sometimes you're searching for something online and you find an unrelated, bonus nugget. Last week I stumbled across an interesting blog post from Chris Heller of a PeopleSoft consulting shop in San Ramon, CA called Grey Sparling. I don't know these guys. But Chris, who apparently used to work on the PeopleTools team, wrote a great article on a pre-acquisition, would-be deal between IBM and PeopleSoft that would have standardized PeopleSoft on IBM technology. The behind-the-scenes perspective is interesting. His commentary on the challenges that the company and PeopleSoft customers would have encountered if the deal had gone through was also interesting:

    - "No common ownership. It's hard enough to get large groups of people to work together when they work for the same company, but with two separate companies it is much, much harder. Even within Oracle, progress on Fusion applications was slow until Thomas Kurian took over Fusion applications in addition to Fusion middleware."

    - "No customer buy-in. PeopleSoft customers weren't asking for a conversion to WebSphere, so the fact that doing that could have helped PeopleSoft stay independent wouldn't have meant much to them, especially since the cost of moving to whatever a "PeopleSoft built on WebSphere" would have been significant."

    - "No executive buy-in. This is related to the previous point, but it's worth calling out separately. If Oracle had walked away and the deal with IBM had gone through, and PeopleSoft customers got put through the wringer as part of WebSphere move, all of the PeopleSoft project teams would be put in the awkward position of explaining to their management why these additional costs and headaches were happening. Essentially they would need to "sell" the partnership internally to their own management team. That's not a fun conversation to have."

    I'm not surprised that something like this was in the works. But I did find the inside scoop and Heller's perspective on the challenges particularly interesting. Especially the advantages of aligning development of applications and infrastructure development under one roof. Here's a link to the whole blog entry.

    Read the article

  • Bill Gross of IdeaLab talks to Don Dodge about his incubator

    Bill Gross of IdeaLab talks to Don Dodge about his incubator Bill Gross has started nearly 100 companies, including Answers.com, CitySearch, Compete, eToys, GoTo.com, NetZero, Picasa, and Tickets.com. Thirty five of his companies have been acquired or gone public. IdeaLab currently has 25 companies active in the incubator. IdeaLab is a very different incubator. From: GoogleDevelopers Views: 492 23 ratings Time: 02:26 More in Science & Technology

    Read the article

  • My Thoughts On the Xbox 180

    - by Chris Gardner
    Originally posted on: http://geekswithblogs.net/freestylecoding/archive/2013/06/21/my-thoughts-on-the-xbox-180.aspx Everyone seems to be putting their 0.00237 cents into the wishing well over Microsoft's recent decision to reverse the DRM policy on the Xbox One. However, there have been a few issues that nobody has touched. As such, I have decided to dig 0.00237 cents out of my pocket. First, let me be clear about this point. I do not support the decision to reverse the DRM policy on the Xbox One. I wanted that point to be expressed first and unambiguously. I will say it again. I do not support the decision to reverse the DRM policy on the Xbox One. Now that I have that out of the way, let me go into my rationale. This decision removes most of the cool features that enticed me to pre-order the console. No, I didn't cancel my pre-order. There is still five months before the release of the console, and there is still a plethora of information that we, as consumers, do not have. With that, it should be noted that much of the talk in this post is speculation and rhetoric. I do not have any insider information that you do not possess. The persistent connection would have allowed the console to do many of the functions for which we have been begging. That demo where someone was playing Ryse, seamlessly accepted a multiplayer challenge in Killer Instinct, played the match (and a rematch,) and then jumped back into Ryse. That's gone, if you bought the game on disc. The new, DRM free system will require the disc in the system to play a game. That bullet point where one Xbox Live account could have up to 10 slave accounts so families could play together, no matter where they were located. That's gone as well. The promise of huge, expansive, dynamically changing worlds that was brought to us with the power of cloud computing. Well, "the people" didn't want there to be a forced, persistent connection. As such, developers can't rely on a connection and, as such, that feature is gone. This is akin to the removal of the hard drive on the Xbox 360. The list continues, but the enthusiast press has enumerated the list far better than I wish. All of this is because the Xbox team saw the HUGE success of Steam and decided to borrow a few ideas. Yes, Steam. The service that everyone hated for the first six months (for the same reasons the Xbox One is getting flack.) There was an initial growing pain. However, it is now lauded as the way games distribution should be handled. Unless you are Microsoft. I do find it curious that many of the features were originally announced for the PS4 during its unveiling. However, much of that was left strangely absent for Sony's E3 press conference. Instead, we received a single, static slide that basically said the exact opposite of Microsoft's plans. It is not farfetched to believe that slide came into existence during the approximately seven hours between the two media briefings. The thing that majorly annoys me over this whole kerfuffle is that the single thing that caused the call to arms is, really, not an issue. Microsoft never said they were going to block used sales. They said it was up to the publisher to make that decision. This would have allowed publishers to reclaim some of the costs of development in subsequent sales of the product. If you sell your game to GameStop for 7 USD, GameStop is going to sell it for 55 USD. That is 48 USD pure profit for them. Some publishers asked GameStop for a small cut. Was this a huge, money grubbing scheme? 
Well, yes, but the idea was that they have to handle server infrastructure for dormant accounts, etc. Of course, GameStop flatly refused, and the Online Pass was born. Fortunately, this trend didn’t last, and most publishers have stopped the practice. The ability to sell "licenses" has already begun to be challenged. Are you living in the EU? If so, companies must allow you to sell digital property. With this precedent in place, it's only a matter of time before other areas follow suit. If GameStop were smart, they should have immediately contacted every publisher out there to get the rights to become a clearing house for these licenses. Then, they keep their business model and could reduce their brick and mortar footprint. The digital landscape is changing. We need to not block this process. As Seth MacFarlane best said "Some issues are so important that you should drag people kicking and screaming." I believe this was said on an episode of Real Time with Bill Maher about the issue of Gay Marriages. Much like the original source, this is an issue that we need to drag people to the correct, progressive position. Microsoft, as a company, actually has the resources to weather the transition period. They have a great pool of first and second party developers that can leverage this new framework to prove the validity. Over time, the third party developers will get excited to use these tools. As an old C++ guy, I resisted C# for years. Now, I think it's one of the best languages I've ever used. I have a server room and a Co-Lo full of servers, so I originally didn't see the value in Azure. Now, I wish I could move every one of my projects into the cloud. I still LOVE getting physical packaging, as my music and games collection will proudly attest. However, I have started to see the value in pure digital, and have found ways to integrate this into the ways I consume those products. I can, honestly, understand how some parts of the population would be very apprehensive about this new landscape. There were valid arguments about people with no internet access. There are ways to combat these problems. These methods do not require us to throw the baby out with the bathwater. However, the number of people in the computer industry that I have seen cry foul is truly appalling. We are the forward looking people that help show how technology can improve people's lives. If we can't see the value of the brief pain involved with an exciting new ecosystem, then who will?

    Read the article

  • Pure front-end JavaScript with Web API versus MVC views with AJAX

    - by eyeballpaul
    This is more of a discussion about what people's thoughts are these days on how to split a web application. I am used to creating an MVC application with all its views and controllers. I would normally create a full view and pass this back to the browser on a full page request, unless there were specific areas that I did not want to populate straight away; for those I would use DOM page-load events to call the server and load other areas using AJAX. Also, when it came to partial page refreshing, I would call an MVC action method which would return the HTML fragment which I could then use to populate parts of the page. This would be for areas that I did not want to slow down the initial page load, or areas that fitted better with AJAX calls. One example would be table paging: if you want to move on to the next page, I would prefer an AJAX call to get that info rather than a full page refresh, but the AJAX call would still return an HTML fragment. My question is: are my thoughts on this archaic because I come from a .NET background rather than a pure front-end background? An intelligent front-end developer that I work with prefers to do more or less nothing in the MVC views and would rather do everything on the front end, right down to Web API calls populating the page. So rather than calling an MVC action method which returns HTML, he would prefer to return a plain object and use JavaScript to create all the elements of the page. The front-end developer's way means that any benefits I normally get from MVC model validation, including client-side validation, would be gone. It also means that any benefits I get from creating the views, with strongly typed HTML templates etc., would be gone. I believe this would mean writing the same validation for both the front end and the back end. The JavaScript would also need lots of methods for creating all the different parts of the DOM. For example, when adding a new row to a table, I would normally use an MVC partial view to create the row and then return it as part of the AJAX call, which then gets injected into the table. In the pure front-end way, the JavaScript would take in an object (for, say, a product) for the row from the API call and then create the row from that object, creating each individual part of the table row. The website in question will have lots of different areas, from administration to forms to product searching, and I don't think it needs to be architected as a single-page application. What are everyone's thoughts on this? I am interested to hear from front-end devs and back-end devs.
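    To make the contrast concrete, here is a small browser-side TypeScript sketch of the two styles being debated, using the add-a-table-row example from the question. The endpoint URLs and the Product shape are made up for illustration; the point is only the difference between injecting a server-rendered fragment and building the row client-side from a JSON object.

      interface Product {
        id: number;
        name: string;
        price: number;
      }

      // Style 1: an MVC action returns a ready-made HTML fragment (e.g. a partial
      // view); the client only has to inject it, and all markup/validation rules
      // stay on the server.
      async function addRowFromPartialView(tableBody: HTMLTableSectionElement): Promise<void> {
        const response = await fetch("/Products/Row?id=42");
        tableBody.insertAdjacentHTML("beforeend", await response.text());
      }

      // Style 2: a Web API action returns JSON; the client owns all markup creation
      // and, by extension, re-implements any display and validation rules.
      async function addRowFromApi(tableBody: HTMLTableSectionElement): Promise<void> {
        const response = await fetch("/api/products/42");
        const product: Product = await response.json();

        const row = tableBody.insertRow();
        row.insertCell().textContent = product.name;
        row.insertCell().textContent = product.price.toFixed(2);
      }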

    Read the article

  • Social Targeting: Who Do You Think You’re Talking To?

    - by Mike Stiles
    Are you the kind of person that tries to sell Clay Aiken CD’s outside Warped Tour concert venues? Then you don’t think a lot about targeting your messages to the right audience. For your communication to pack the biggest punch it can, you need to know where to throw it. And a recent study on social demographics might help you see social targeting in a whole new light. Pingdom’s annual survey of social network demographics shows us first of all that there is no gender difference between Facebook and Twitter. Both are 40% male, 60% female. If you’re looking for locales that lean heavily male, that would be Slashdot, Hacker News and Stack Overflow. The women are dominating Pinterest, Goodreads and Blogger. So what about age? 55% of tweeters are 35 and up, compared with 63% at Pinterest, 65% at Facebook and 70% at LinkedIn. As you can tell, LinkedIn supports the oldest user base, with the average member being 44. The average age at Facebook is 51, and it’s 37 at Twitter. If you want to aim younger, have you met Orkut yet? 83% of its users are under 35. The next sites in order as great candidates for the young market are deviantART, Hacker News, Hi5, Github, and Reddit. I know, other than Reddit, many of you might be saying “who?” But the list could offer an opportunity to look at the vast social world beyond Facebook, Twitter and Google+ (which Pingdom did not include in the survey at all due to a lack of accessible data). As for the average age of social users overall:

    26% are 25-34
    25% are 35-44
    19% are 45-54
    16% are 18-24
    6% are 55-64
    5% are 0-17
    and 2% are 65

    Now you know where you stand on the “cutting edge” scale for a person your age. You’re welcome. Certainly such demographics are a moving target and need to be watched and reassessed on a regular basis to make sure you’re moving in step with the people you want to talk to. For instance, since Pingdom’s survey last year, the age of the average Facebook user has gone up 2 years, while the age of the average Twitter user has gone down 2 years. With the targeting and analytics tools available on today’s social management platforms, there’s little need to market in the dark. Otherwise, good luck with those Clay CD’s.

    Read the article

  • Before the Summit of 2012

    - by Ajarn Mark Caldwell
    Today, Monday, was the first day of the PASS Summit Preconference training events, but instead I spent the day at the free SQL in the City event put on by Red Gate. For me this was not a financial decision (pre-con sessions cost extra above the general Summit registration) but rather a matter of interest.  I had already included money for pre-cons in this year’s training budget, but none of them really stood out to me, so even if the Red-Gate event were not going on at the same time, I probably would not have gone to any pre-cons this year.  However, the topics being presented at the SQL in the City event were of great interest to me.  There promised to be good information on Continuous Integration and automated deployment of database changes, which lately has been a real hot topic at my work.  And indeed, Red-Gate announced the release of a new tool (still in Early Access Program…a.k.a. Beta) which is called the Deployment Manager.  Since we are in the middle of a TFS implementation project, it will be interesting to see how this plays out and compares to what we put together with the automated builds in TFS.  But, as I understand it, the primary focus of Deployment Manager is not to be the Build process (Red Gate uses JetBrains’ Team City for that in their shop) but rather to aid in the deployment of those build packages, as well as providing easy rollback and a good visualization of which versions of software are in which environments.  It looks promising and I’ve already downloaded the installer package to play with it later. Overall, I was quite impressed with the SQL in the City event.  Having heard many current and past members of the PASS Board of Directors describe the challenges of putting on a large conference, and the growing pains that the PASS Summit has gone through, I am even more impressed that the Red Gate event ran as smoothly as it did.  And it is quite impressive the amount of money that Red Gate must have spent given that this was a no-charge event to attend, they had a very nice hot lunch, and the after-event drinks celebration.  Well done, folks! Of course it was great to hear from a variety of speakers.  Today I listened to some folks from Red Gate like Grant Fritchey (blog | @GFritchey) and David Atkinson (Product Manager for SQL Source Control and now the Deployment Manager tool set); and also Brent Ozar (blog | @BrentO) and Buck Woody (blog | @BuckWoody).  By the way, if you have never seen either Brent or Buck speak, you really should.  Different styles, but both are very entertaining and educational at the same time.  I love Buck’s sense of humor (here’s a tip…don’t be late to Buck’s session or you’ll become part of the presentation) and I praise Brent’s slides.  Brent’s style very much reminds me of that espoused by Garr Reynolds on his Presentation Zen blog (and book) and I am impressed that he can make a technical presentation so engaging. It was a great day, a great way to kick off the week, and I am excited to get into the full Summit!

    Read the article

  • Say goodbye to System.Reflection.Emit (any dynamic proxy generation) in WinRT

    - by mbrit
    tl;dr - Forget any form of dynamic code emitting in Metro-style. It's not going to happen.

    Over the past week or so I've been trying to get Moq (the popular open source TDD mocking framework) to work on WinRT. Irritatingly, the day before Release Preview was released it was actually working on Consumer Preview. However, in Release Preview (RP) the System.Reflection.Emit namespace is gone. Forget any form of dynamic code generation and/or MSIL injection. This kills off any project based on the popular Castle Project Dynamic Proxy component, of which Moq is one example. You can, at this point in time, not perform any form of mocking using dynamic injection in your Metro-style unit testing endeavours. So let me take you through my journey on this, so that others don't have to...

    The headline fact is that you cannot load any assembly that you create at runtime. WinRT supports one Assembly.Load method, and that takes the name of an assembly, which has to be placed within the deployment folder of your app. You cannot give it a filename or a stream. The methods are there, but private; try to invoke them using Reflection and you'll be met with a caspol exception. You can, in theory, use Rotor to replace SRE. It's all there, but again, you can't load anything you create. You can't write to your deployment folder from within your Metro-style app. But can you use another service on the machine to move a file that you create into the deployment folder and load it? Not really. The networking stack in Metro-style is intentionally "damaged" to prevent socket communication from Metro-style to any endpoint on the local machine. (It just times out.) This militates against an approach where your Metro-style app signals a properly installed service on the machine to create proxies on its behalf. If you wanted to do this, you'd have to route the calls through a C&C server somewhere.

    The reason why Microsoft has done this is obvious - taking out SRE now means they don't have to do it in an emergency later. The collateral damage in removing SRE is that you can't do mocking in test mode, but you also can't do any form of injection in production mode. There are plenty of reasons why enterprise apps might want to do that last point particularly. At CP, the assumption was that their inspection tools would prevent SRE being used as a malware vector - it now seems they are less confident about that. (For clarity, the risk here is in allowing a nefarious program to download instructions from a C&C server and make up executable code on the fly to run, getting around the marketplace restrictions.)

    So, two things:
    - System.Reflection.Emit is gone in Metro-style/WinRT. Get over it - dynamic, on-the-fly code generation is not going to happen.
    - I've more or less got a version of Moq working in Metro-style. This is based on the idea of "baking" the dynamic proxies before you use them. You can find more information here: https://github.com/mbrit/moqrt

    Read the article

  • Taking 10 minutes to boot up!

    - by oshirowanen
    Just added 2 PCI-E to IDE cards to my computer so I can use 2 old IDE hard drives. Everything works fine, except the OS boot time has gone from about 10 seconds to about 10 minutes... If I remove both cards, it takes about 10 seconds to boot up; if I add either one of the cards back in, it still takes 10 seconds to boot up; but as soon as I have both cards in, it takes about 10 minutes. Why would this be happening?

    Read the article

  • How do I fix my theme?

    - by SepiDev
    After installing and using Ubuntu 11.10 for a while, I decided to install GNOME Shell. After installing it and rebooting my system, I saw that the Ubuntu light theme (Ambiance/Radiance) is gone. I checked /usr/share/themes and /usr/share/icons, and it seems that the light themes and mono icons exist. I even reinstalled the gtk3-engine-unico package, but none of these efforts fixed my problem :( My desktop now looks like this: What should I do to get my default Ubuntu theme back?

    Read the article

  • What constitutes a "substantial, good-faith effort to remove the links"

    - by Luke McCallum
    We engaged the services of a 3rd party SEO consultant to assist us in managing our Meta data and to write regular blogs on our site http://cyberdesignworks.com.au Without our authorisation, the SEO also ran a link building campaign which has seen us Penguin slapped and we no longer appear in Google for a number of our core keywords. Since notification by Google that we have "unnatural links" back in March we have undertaken a significant campaign to rid ourselves of these dodgy backlinks by a number of methods. I have just received feedback on my 4th or 5th resubmission which is still advising that we need to make a "substantial, good-faith effort to remove the links" before Google will reconsider us for inclusion. After the effort that I have gone through to get links removed, I am now at a loss as to what else I can do to demonstrate "substantial, good-faith effort to remove the links". Below is a summary of the actions that we have taken to date.

    - According to http://removem.com we had about 5584 back-linking domains.
    - Of those we have successfully contacted and had removed links from 344 domains.
    - We ignored links from 625 domains as they were either legitimate press releases, natural backlinks or client websites containing an attribution link in the footer that points back to us.
    - Due to our efforts, or the sites simply becoming defunct, removem.com reports that links from 3262 domains have been removed.
    - We have contacted but are yet to receive feedback from 1666 domains, so we can assume that the backlinks remain. We have configured an automatic 301 redirect for each of the links from these 1666 domains to point to http://redirects.sanscode.com/ which we are calling our Bad Link Catcher (a stroke of genius I thought), i.e. http://www.mysimplewebdesign.com/create-a-perfect-webpage-with-four-important-tips-from-sydney-web-development-service-companies.php
    - As we are a web design agency, we have a large number of client websites which contain an attribution link in their footer which points back to us. We have gone through the vast majority of these and updated these links to replace anchor text with an image and rel="nofollow" link, i.e. <a rel="nofollow" target="_blank" href="http://www.cyberdesignworks.com.au/"><img src="https://sessions.sanscode.com/site/assets/media/badges/Badge_CDW_SANSCODE.png"></a> See http://www.milkatwork.com.au/
    - An export from http://removem.com detailing the number of times we have contacted each link and whether it is still found or not was also supplied with each resubmission.
    - The total back links reported in Google Web Master Tools has dropped from over 100K to 87K and I expect it to drop significantly lower once Google re-crawls each back-linking page.

    Based on all of the above, I am not sure what else I can do to demonstrate a "substantial, good-faith effort to remove the links". I would sincerely appreciate any feedback or suggestions that you may have as I am out of ideas.

    Read the article
