Search Results

Search found 7529 results on 302 pages for 'replace'.


  • GNOME panel crash

    - by josh
    When trying to log in using GNOME Classic on Ubuntu 11.10, gnome-panel crashes. I can still see the desktop and I can open applications via the terminal, but when I try to run the panel using "gnome-panel" or "gnome-panel --replace" it crashes with this error:

      (gnome-panel:9694): Gtk-CRITICAL **: gtk_style_context_get: assertion `priv->widget_path != NULL' failed
      (gnome-panel:9694): Gtk-CRITICAL **: gtk_style_context_get: assertion `priv->widget_path != NULL' failed
      (gnome-panel:9694): GLib-GObject-WARNING **: invalid uninstantiatable type `(null)' in cast to `PanelWidget'
      gnome-panel: /build/buildd/cairo-1.10.2/src/cairo-pattern.c:764: cairo_pattern_reference: Assertion `((*&(&pattern->ref_count)->ref_count) > 0)' failed.
      Aborted

    Read the article

  • Anti-Forgery Request in ASP.NET MVC and AJAX

    - by Dixin
    Background

    To secure websites against cross-site request forgery (CSRF, or XSRF) attacks, ASP.NET MVC provides an excellent mechanism: the server prints tokens to a cookie and inside the form; when the form is submitted to the server, the token in the cookie and the token inside the form are both sent with the HTTP request; the server then validates the tokens. To print the tokens to the browser, just invoke HtmlHelper.AntiForgeryToken():

      <% using (Html.BeginForm()) { %>
          <%: this.Html.AntiForgeryToken(Constants.AntiForgeryTokenSalt)%>
          <%-- Other fields. --%>
          <input type="submit" value="Submit" />
      <% } %>

    which writes the token to the form:

      <form action="..." method="post">
          <input name="__RequestVerificationToken" type="hidden" value="J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP" />
          <!-- Other fields. -->
          <input type="submit" value="Submit" />
      </form>

    and the cookie:

      __RequestVerificationToken_Lw__=J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP

    When the above form is submitted, both tokens are sent to the server. The [ValidateAntiForgeryToken] attribute is used to specify the controllers or actions that validate them:

      [HttpPost]
      [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)]
      public ActionResult Action(/* ... */)
      {
          // ...
      }

    This is very productive for form scenarios. But recently, when resolving security vulnerabilities for Web products, I encountered 2 problems:

      1. It is expected that [ValidateAntiForgeryToken] can be added once to each controller, but actually I have to add it to each POST action, which is a little crazy;
      2. After anti-forgery validation is turned on server side, AJAX POST requests will consistently fail.

    Specify validation on controller (not on each action)

    Problem: Usually a controller contains actions for both HTTP GET and HTTP POST requests, and usually validation is expected only for the HTTP POST requests. So, if [ValidateAntiForgeryToken] is declared on the controller, the HTTP GET requests always become invalid:

      [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)]
      public class SomeController : Controller
      {
          [HttpGet]
          public ActionResult Index() // Index page cannot work at all.
          {
              // ...
          }

          [HttpPost]
          public ActionResult PostAction1(/* ... */)
          {
              // ...
          }

          [HttpPost]
          public ActionResult PostAction2(/* ... */)
          {
              // ...
          }

          // ...
      }

    If a user sends an HTTP GET request from a link like http://Site/Some/Index, validation definitely fails, because no token is provided. So the result is that the [ValidateAntiForgeryToken] attribute must be distributed to each HTTP POST action in the application:

      public class SomeController : Controller
      {
          [HttpGet]
          public ActionResult Index() // Works.
          {
              // ...
          }

          [HttpPost]
          [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)]
          public ActionResult PostAction1(/* ... */)
          {
              // ...
          }

          [HttpPost]
          [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)]
          public ActionResult PostAction2(/* ... */)
          {
              // ...
          }

          // ...
      }

    Solution: To avoid a large number of [ValidateAntiForgeryToken] attributes (one attribute for one HTTP POST action), I created a wrapper class around ValidateAntiForgeryTokenAttribute, where the HTTP verbs can be specified:

      [AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = false, Inherited = true)]
      public class ValidateAntiForgeryTokenWrapperAttribute : FilterAttribute, IAuthorizationFilter
      {
          private readonly ValidateAntiForgeryTokenAttribute _validator;

          private readonly AcceptVerbsAttribute _verbs;

          public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs)
              : this(verbs, null)
          {
          }

          public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs, string salt)
          {
              this._verbs = new AcceptVerbsAttribute(verbs);
              this._validator = new ValidateAntiForgeryTokenAttribute() { Salt = salt };
          }

          public void OnAuthorization(AuthorizationContext filterContext)
          {
              string httpMethodOverride = filterContext.HttpContext.Request.GetHttpMethodOverride();
              if (this._verbs.Verbs.Contains(httpMethodOverride, StringComparer.OrdinalIgnoreCase))
              {
                  this._validator.OnAuthorization(filterContext);
              }
          }
      }

    When this attribute is declared on a controller, only HTTP requests with the specified verbs are validated:

      [ValidateAntiForgeryTokenWrapper(HttpVerbs.Post, Constants.AntiForgeryTokenSalt)]
      public class SomeController : Controller
      {
          // Actions for HTTP GET requests are not affected.
          // Only HTTP POST requests are validated.
      }

    Now one single attribute on the controller turns on validation for all of its HTTP POST actions.

    Submit token via AJAX

    Problem: For AJAX scenarios, when the request is sent by JavaScript instead of a form:

      $.post(url, {
          productName: "Tofu",
          categoryId: 1 // Token is not posted.
      }, callback);

    this kind of AJAX POST request will always be invalid, because the server-side code cannot see the token in the posted data.

    Solution: The token must be printed to the browser and then submitted back to the server. So first of all, HtmlHelper.AntiForgeryToken() must be called in the page from which the AJAX POST will be sent. Then jQuery must find the printed token in the page, and post it:

      $.post(url, {
          productName: "Tofu",
          categoryId: 1,
          __RequestVerificationToken: getToken() // Token is posted.
      }, callback);

    To be reusable, this can be encapsulated in a tiny jQuery plugin:

      (function ($) {
          $.getAntiForgeryToken = function () {
              // HtmlHelper.AntiForgeryToken() must be invoked to print the token.
              return $("input[type='hidden'][name='__RequestVerificationToken']").val();
          };

          var addToken = function (data) {
              // Converts data if not already a string.
              if (data && typeof data !== "string") {
                  data = $.param(data);
              }
              data = data ? data + "&" : "";
              return data + "__RequestVerificationToken=" + encodeURIComponent($.getAntiForgeryToken());
          };

          $.postAntiForgery = function (url, data, callback, type) {
              return $.post(url, addToken(data), callback, type);
          };

          $.ajaxAntiForgery = function (settings) {
              settings.data = addToken(settings.data);
              return $.ajax(settings);
          };
      })(jQuery);

    Then in the application, just replace each $.post() invocation with $.postAntiForgery(), and each $.ajax() invocation with $.ajaxAntiForgery():

      $.postAntiForgery(url, {
          productName: "Tofu",
          categoryId: 1
      }, callback); // Token is posted.

    This solution looks hard coded and stupid. If you have a more elegant solution, please do tell me.
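
    A note beyond the original post: if you are on ASP.NET MVC 3 or later, another way to avoid per-controller attributes is to register the wrapper once as a global filter. This is a minimal sketch under that assumption, reusing the ValidateAntiForgeryTokenWrapperAttribute and Constants.AntiForgeryTokenSalt shown above; it applies the check to every POST action in the application, so make sure no action needs to accept posts from outside your own pages.

      // Global.asax.cs - one registration instead of one attribute per controller.
      using System.Web;
      using System.Web.Mvc;

      public class MvcApplication : HttpApplication
      {
          protected void Application_Start()
          {
              // GET requests are still ignored, because the wrapper only validates the verbs passed here.
              GlobalFilters.Filters.Add(
                  new ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs.Post, Constants.AntiForgeryTokenSalt));
          }
      }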

    Read the article

  • Which LAN card / module combinations proven to work with Wake on LAN

    - by pablomo
    I've got a 12.04 headless server that I've been trying to get to work with wake-on-LAN. The card is a Marvell 88E8053 using the sky2 module. Although WOL is enabled in the BIOS and ethtool shows the card as WOL enabled, it refuses to wake when I send the magic packet. I have verified that the packet is being received OK when the machine is on. The machine does wake OK from a BIOS alarm, which suggests it is a network card issue. I've seen references to bugs in sky2 that mean WOL fails in recent versions of Ubuntu (and have tried a module conf file as suggested here, but to no avail). So I am thinking the best bet is to replace the Ethernet card with one that definitely works with WOL in 12.04 - please could you post your card make and model number if you are using it successfully?

    Read the article

  • The ugly evolution of running a background operation in the context of an ASP.NET app

    - by Jeff
    If you’re one of the two people who have followed my blog for many years, you know that I’ve been going at POP Forums now for almost 15 years. Publishing it as an open source app has been a big help because it helps me understand how people want to use it, and having it translated to six languages is pretty sweet. Despite this warm and fuzzy group hug, there has been an ugly hack hiding in there for years.

    One of the things we find ourselves wanting to do is hide some kind of regular process inside of an ASP.NET application that runs periodically. The motivation for this has always been that a lot of people simply don’t have a choice, because they’re running the app on shared hosting, or don’t otherwise have access to a box that can run some kind of regular background service. In POP Forums, I “solved” this problem years ago by hiding some static timers in an HttpModule. Truthfully, this works well as long as you don’t run multiple instances of the app, which, in the cloud world, is always a possibility. With the arrival of WebJobs in Azure, I’m going to solve this problem. This post isn’t about that.

    The other little hacky problem that I “solved” was spawning a background thread to queue emails to subscribed users of the forum. This evolved quite a bit over the years, starting with a long running page to mail users in real-time, when I had only a few hundred. By the time it got into the thousands, or tens of thousands, I needed a better way. What I did was launch a new thread that read all of the user data in, then wrote a queued email to the database (as in, the entire body of the email, every time), with the properly formatted opt-out link. It was super inefficient, but it worked. Then I moved my biggest site using it, CoasterBuzz, to an Azure Website, and it stopped working.

    So let’s start with the first stupid thing I was doing. The new thread was simply created with delegate code inline. As best I can tell, Azure Websites are more aggressive about garbage collection, because that thread didn’t queue even one message. When the calling server response went out of scope, so went the magic background thread. Duh, all I had to do was move the thread to a private static variable in the class. That’s the way I was able to keep stuff running from the HttpModule. (And yes, I know this is still prone to failure, particularly if the app recycles. For as infrequently as it’s used, I have not, however, experienced this.)

    It was still failing, but this time I wasn’t sure why. It would queue a few dozen messages, then die. Running in Azure, I had to turn on the application logging and FTP in to see what was going on. That led me to a helper method I was using as a delegate to build the unsubscribe links. The idea here is that I didn’t want yet another config entry to describe the base URL, appended with the right path that would match the routing table. No, I wanted the app to figure it out for you, so I came up with this little thing:

      public static string FullUrlHelper(this Controller controller, string actionName, string controllerName, object routeValues = null)
      {
          var helper = new UrlHelper(controller.Request.RequestContext);
          var requestUrl = controller.Request.Url;
          if (requestUrl == null)
              return String.Empty;
          var url = requestUrl.Scheme + "://";
          url += requestUrl.Host;
          url += (requestUrl.Port != 80 ? ":" + requestUrl.Port : "");
          url += helper.Action(actionName, controllerName, routeValues);
          return url;
      }

    And yes, that should have been done with a string builder.
    This is useful for sending out the email verification messages, too. As clever as I thought I was with this, I was using a delegate in the admin controller to format these unsubscribe links for tens of thousands of users. I passed that delegate into a service class that did the email work:

      Func<User, string> unsubscribeLinkGenerator =
          user => this.FullUrlHelper("Unsubscribe", AccountController.Name, new { id = user.UserID, key = _profileService.GetUnsubscribeHash(user) });
      _mailingListService.MailUsers(subject, body, htmlBody, unsubscribeLinkGenerator);

    Cool, right? Actually, not so much. If you look back at the helper, this delegate will then depend on the controller context to learn the routing and format for the URL. As you might have guessed, those things were turning null after a few dozen formatted links, when the original request to the admin controller went away. That this wasn’t already happening on my dedicated server is surprising, but again, I understand why the Azure environment might be eager to reclaim a thread after servicing the request. It’s already inefficient that I’m building the entire email for every user, but going back to check the routing table for the right link every time isn’t a win either. I put together a little hack to look up one generic URL, and use that as the basis for a string format. If you’re wondering why I didn’t just use the curly braces up front, it’s because they get URL formatted:

      var baseString = this.FullUrlHelper("Unsubscribe", AccountController.Name, new { id = "--id--", key = "--key--" });
      baseString = baseString.Replace("--id--", "{0}").Replace("--key--", "{1}");
      Func<User, string> unsubscribeLinkGenerator =
          user => String.Format(baseString, user.UserID, _profileService.GetUnsubscribeHash(user));
      _mailingListService.MailUsers(subject, body, htmlBody, unsubscribeLinkGenerator);

    And wouldn’t you know it, the new solution works just fine. It’s still kind of hacky and inefficient, but it will work until this somehow breaks too.
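
    The first fix described above - keeping the worker thread referenced from a private static field so it is not reclaimed when the request goes out of scope - looks roughly like the following. This is a minimal sketch for illustration only, not the actual POP Forums code; the class name and the queuing delegate are made up:

      using System;
      using System.Threading;

      public class EmailQueueWorker
      {
          // Held in a static field so the thread outlives the request that started it.
          // (Still not durable across app pool recycles, as noted in the post.)
          private static Thread _workerThread;

          public static void Start(Action queueAllEmails)
          {
              if (_workerThread != null && _workerThread.IsAlive)
                  return; // A run is already in progress.

              _workerThread = new Thread(() => queueAllEmails())
              {
                  IsBackground = true,
                  Name = "EmailQueueWorker"
              };
              _workerThread.Start();
          }
      }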

    Read the article

  • Upgrading to latest stable Mono

    - by Oli
    Mono 2.8 was recently released, boasting a couple of large performance improvements. It's far too late for it to make it into Maverick and I'm fairly impatient. I don't use Mono for anything mission-critical (just playing music and sorting photos) and if it breaks everything related to Mono, I can probably either live with it or fix it. I'm aware of how much I stand to lose if I mess things up. So with that acknowledged, does anybody here know how to build Mono in a way where it could be dropped in to replace the current Mono (2.6.7)? By this I mean ideally mirroring the packages that Ubuntu uses, so that if the worst does happen, I can just downgrade the packages. Or is there a PPA that does all this for me?

    Read the article

  • Simple-Talk development: a quick history lesson

    - by Michael Williamson
    Up until a few months ago, Simple-Talk ran on a pure .NET stack, with IIS as the web server and SQL Server as the database. Unfortunately, the platform for the site hadn’t quite gotten the love and attention it deserved. On the one hand, in the words of our esteemed editor Tony “I’d consider the current platform to be a “success”; it cost $10K, has lasted for 6 years, was finished, end to end in 6 months, and although we moan about it has got us quite a long way.” On the other hand, it was becoming increasingly clear that it needed some serious work. Among other issues, we had authors that wouldn’t blog because our current blogging platform, Community Server, was too painful for them to use. Forgetting about Simple-Talk for a moment, if you ask somebody what blogging platform they’d choose, the odds are they’d say WordPress. Regardless of its technical merits, it’s probably the most popular blogging platform, and it certainly seemed easier to use than Community Server. The issue was that WordPress is normally hosted on a Linux stack running PHP, Apache and MySQL — quite a difference from our Microsoft technology stack. We certainly didn’t want to rewrite the entire site — we just wanted a better blogging platform, with the rest of the existing, legacy site left as is. At a very high level, Simple-Talk’s technical design was originally very straightforward: when your browser sends an HTTP request to Simple-Talk, IIS (the web server) takes the request, does some work, and sends back a response. In order to keep the legacy site running, except with WordPress running the blogs, a different design is called for. We now use nginx as a reverse-proxy, which can then delegate requests to the appropriate application: So, when your browser sends a request to Simple-Talk, nginx takes that request and checks which part of the site you’re trying to access. Most of the time, it just passes the request along to IIS, which can then respond in much the same way it always has. However, if your request is for the blogs, then nginx delegates the request to WordPress. Unfortunately, as simple as that diagram looks, it hides an awful lot of complexity. In particular, the legacy site running on IIS was made up of four .NET applications. I’ve already mentioned one of these applications, Community Server, which handled the old blogs as well as managing membership and the forums. We have a couple of other applications to manage both our newsletters and our articles, and our own custom application to do some of the rendering on the site, such as the front page and the articles. When I say that it was made up of four .NET applications, this might conjure up an image in your mind of how they fit together: You might imagine four .NET applications, each with their own database, communicating over well-defined APIs. Sadly, reality was a little disappointing: We had four .NET applications that all ran on the same database. Worse still, there were many queries that happily joined across tables from multiple applications, meaning that each application was heavily dependent on the exact data schema that each other application used. Add to this that many of the queries were at least dozens of lines long, and practically identical to other queries except in a few key spots, and we can see that attempting to replace one component of the system would be more than a little tricky. However, the problems with the old system do give us a good place to start thinking about desirable qualities from any changes to the platform. 
    Specifically:

      - Maintainability — the tight coupling between each .NET application made it difficult to update any one application without also having to make changes elsewhere
      - Replaceability — the tight coupling also meant that replacing one component wouldn’t be straightforward, especially if it wasn’t on a similar Microsoft stack. We’d like to be able to replace different parts without having to modify the existing codebase extensively
      - Reusability — we’d like to be able to combine the different pieces of the system in different ways for different sites
      - Repeatable deployments — rather than having to deploy the site manually with a long list of instructions, we should be able to deploy the entire site with a single command, allowing you to create a new instance of the site easily whether on production, staging servers, test servers or your own local machine
      - Testability — if we can deploy the site with a single command, and each part of the site is no longer dependent on the specifics of how every other part of the site works, we can begin to run automated tests against the site, and against individual parts, both to prevent regressions and to do a little test-driven development

    In the next part, I’ll describe the high-level architecture we now have that hopefully brings us a little closer to these five traits.

    Read the article

  • How to setup Automount/Autofs

    - by matt wilkie
    I've followed the Ubuntu help docs for setting up NFSv4 on a server running Ubuntu 10.04 LTS and now I'm trying to get Autofs (on Ubuntu 10.10) to mount the exports, following these instructions. So far it doesn't work. Where the docs say

      server -fstype=nfs4 server:/

    I'm supposed to replace 'server' with my server's hostname, right? If yes, should that be server-foo or server-foo.local?

      # Sample /etc/auto.master file
      # --- comments snipped --8<--
      +auto.master        # pre-existing
      /nfs /etc/auto.nfs  # added by me

      # manually created /etc/auto.nfs
      ubuntu-server.local -fstype=nfs4 ubuntu-server.local:/

    ls /nfs/ubuntu-server /nfs/ubuntu-server.local shows nothing. What's the next troubleshooting step?

    Read the article

  • NoSQL is not about object databases

    NoSQL as a movement is an interesting beast. I kinda like that it's negatively defined (I happen to belong myself to at least one other such a-community). It's not in its roots about proposing one specific new silver bullet to kill an old problem; it's about challenging the consensus. Actually, blindly and systematically replacing relational databases with object databases would just replace one set of issues with another. No, the point is to recognize that relational databases are not a universal...

    Read the article

  • Unity 3D hangs after power blackout

    - by Yusuf
    Unity 3D was working just fine till we had a blackout in the area (I have not yet bought a UPS :(). I tried the solutions here, but they do not help. I tried issuing a unity --reset and it gives me this output: http://pastebin.com/UW83w5cG

    What seems to be important (to me) from that log is this part:

      (compiz:6510): GConf-CRITICAL **: gconf_client_add_dir: assertion `gconf_valid_key (dirname, NULL)' failed
      ERROR 2012-05-29 21:50:41 unity.launcher.trashlaunchericon TrashLauncherIcon.cpp:62 Could not create file monitor for trash uri: Operation not supported

    For the time being, I can use Unity-2D, but I would like the 3D to work as well. Can you please help?

    Edit: Output when issuing $ unity --replace: http://pastebin.com/sBCPbyrT

      (compiz:2811): GConf-CRITICAL **: gconf_client_add_dir: assertion `gconf_valid_key (dirname, NULL)' failed
      ERROR 2012-06-01 20:21:24 unity.launcher.trashlaunchericon TrashLauncherIcon.cpp:62 Could not create file monitor for trash uri: Operation not supported
      Initializing unityshell options...done
      WARN 2012-06-01 20:21:25 unity.screeneffectframebufferobject ScreenEffectFramebufferObject.cpp:167 unsupported internal format
      Segmentation fault (core dumped)

    Read the article

  • True Excel Templates for BI Publisher

    - by Annemarie Provisero
    ADVISOR WEBCAST: True Excel Templates for BI Publisher
    PRODUCT FAMILY: EBS/ATG/BI Publisher
    July 12, 2011 at 7 am PT, 8 am MT, 10 am ET

    This one-hour session is recommended for technical and functional users who want to learn how to code Excel-formatted layouts for use with BI Publisher to generate binary Excel output.

    TOPICS WILL INCLUDE:

      - Creating a simple template
      - Formatting Dates
      - Creating Functions

    A short, live demonstration (only if applicable) and question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Click here to register for this session.

    -------------------------------------------------------------------------------------------------------------

    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • Url Navigation

    - by russ.bishop
    One of the new features is URL-based navigation which is useful for creating intranet links or auto-generating email links (such as from workflow systems, etc). For IIS 6 and earlier, the format is as follows:

      http://machine/drm-client/Logon.aspx?app=<appname>&action=go&ver=<version name>&hier=<hier name>&node=<node name>

    Just replace the fields with their appropriate values (URL-encoded of course). <node name> is optional. If provided it will open the hierarchy and expand directly to the target node. Otherwise the hierarchy is opened to the top node. Note that if the specified version is not loaded it will be loaded automatically.
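
    For instance, a link for a hypothetical application might look like the one below (the application, version, hierarchy, and node names here are made up and shown URL-encoded):

      http://drmserver/drm-client/Logon.aspx?app=Finance%20Master&action=go&ver=Current%20Version&hier=Cost%20Centers&node=CC1000

    Dropping the &node=... parameter from that link would simply open the Cost Centers hierarchy at its top node.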

    Read the article

  • Oracle Demantra BAL Migration

    - by Annemarie Provisero
    ADVISOR WEBCAST: Oracle Demantra BAL Migration
    PRODUCT FAMILY: Manufacturing
    December 7, 2011 at 8 am PT, 9 am MT, 11 am ET

    This one-hour live advisor webcast provides Application Consultants, Administrative and Technical Users, and Customers with an introduction to the BAL Migration utility and details the steps involved in the migration process. The information provided trains you on leveraging the BAL Migration utility to migrate Demantra application objects from one environment to another.

    TOPICS WILL INCLUDE:

      - Introduction to BAL Migration
      - BAL Migration Steps
      - Schema Setups
      - BAL Repository Object Migration

    A short, live demonstration (only if applicable) and question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Click here to register for this session.

    -------------------------------------------------------------------------------------------------------------

    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • How to perform regular expression based replacements on files with MSBuild

    - by Daniel Cazzulino
    And without a custom DLL with a task, too. The example at the bottom of the MSDN page on MSBuild Inline Tasks already provides pretty much all you need for that, with a TokenReplace task that receives a file path, a token and a replacement and uses string.Replace with that. Similar in spirit but way more useful in its implementation is the RegexTransform in NuGet’s Build.tasks. It’s much better not only because it supports full regular expressions, but also because it receives items, which makes it very amenable to batching (applying the transforms to multiple items). You can read about how to use it for updating assemblies with a version number, for example. I recently had a need to also supply RegexOptions to the task, so I extended the metadata and a little bit of the inline task so that it can parse the optional flags. So when using the task, I can pass the flags as item metadata as follows:...Read full article

    Read the article

  • Minecraft shows black screen on watt-os 64 after logon

    - by uffe hellum
    Minecraft appears to launch with oracle java 7, but crashes after logon.

      $ java -Xmx1024M -Xms512M -cp ./minecraft.jar net.minecraft.LauncherFrame asdf
      Exception in thread "Thread-3" java.lang.UnsatisfiedLinkError: /home/uffeh/.minecraft/bin/natives/liblwjgl.so: /home/uffeh/.minecraft/bin/natives/liblwjgl.so: wrong ELF class: ELFCLASS32 (Possible cause: architecture word width mismatch)
        at java.lang.ClassLoader$NativeLibrary.load(Native Method)
        at java.lang.ClassLoader.loadLibrary1(ClassLoader.java:1939)
        at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1864)
        at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1825)
        at java.lang.Runtime.load0(Runtime.java:792)
        at java.lang.System.load(System.java:1059)
        at org.lwjgl.Sys$1.run(Sys.java:69)
        at java.security.AccessController.doPrivileged(Native Method)
        at org.lwjgl.Sys.doLoadLibrary(Sys.java:65)
        at org.lwjgl.Sys.loadLibrary(Sys.java:81)
        at org.lwjgl.Sys.(Sys.java:98)
        at net.minecraft.client.Minecraft.F(SourceFile:1857)
        at aof.(SourceFile:20)
        at net.minecraft.client.Minecraft.(SourceFile:77)
        at anw.(SourceFile:36)
        at net.minecraft.client.MinecraftApplet.init(SourceFile:36)
        at net.minecraft.Launcher.replace(Launcher.java:136)
        at net.minecraft.Launcher$1.run(Launcher.java:79)

    Read the article

  • Game Center alternatives for non-iOS development

    - by Eat at Joes
    I have completed a game for iOS which integrates GameKit. I am happy with Game Center; however, my game also has an HTML5 web version and will soon have an Android version. My question is: what alternatives do I have for non-iOS platforms, primarily for Android and, to a lesser extent, a Javascript/Web SDK? I looked at OpenFeint a year ago and it seemed to be a good solution back then, but I am not sure if this is still the case. Note, I have no plans to replace what I already have in my iOS game, and I understand the leaderboards, users, and achievements won't be shared outside of Game Center.

    Read the article

  • Trying to do a batch rename, can't figure out the proper RegEx

    - by trezy
    I'm trying to rename my movie collection. All of the files are currently named using dots instead of spaces, i.e. Men.in.Black.avi. I want to replace all of the dots with spaces which isn't terribly difficult, but I need to preserve the last dot for the file extension, i.e. .avi, .mp4, .ogg, etc. My Googling has provided no solutions. I'm also a Javascript developer and could see some snazzy applications for it. So, any suggestions?
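
    Purely as an illustration of the kind of pattern involved (not part of the original question), a lookahead that only matches dots still followed by another dot leaves the extension's dot untouched. A minimal sketch in C#, where the directory scan and file names are assumptions:

      using System;
      using System.IO;
      using System.Text.RegularExpressions;

      class RenameSketch
      {
          static void Main()
          {
              foreach (var path in Directory.GetFiles(".", "*.*"))
              {
                  var name = Path.GetFileName(path);
                  // Replace every dot that still has another dot after it,
                  // so "Men.in.Black.avi" becomes "Men in Black.avi".
                  var renamed = Regex.Replace(name, @"\.(?=[^.]*\.)", " ");
                  if (renamed != name)
                      File.Move(path, Path.Combine(Path.GetDirectoryName(path) ?? ".", renamed));
              }
          }
      }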

    Read the article

  • 600 visitors per day, 20 backlinks but still not referenced by Google

    - by Tristan
    Hello, I've launched a website on Wednesday (9 August). In 4 days there are already:

      - 12,000 viewed pages
      - 1,400 visits
      - 20 to 25 backlinks
      - a sitemap.xml (130 URLs)
      - English language / French language - URLs like "/en/" and "/fr/"

    BUT, I'm still not referenced by Google. In Google Webmaster Tools I have 0 backlinks, 130 URLs in the sitemap, and 0 URLs referenced on Google. For a smaller website, I remember that it took me less time to appear on Google with fewer visits. My URL is http://www.seek-team.com/en/ for English; just replace /en/ with /fr/ to access it in French. What's causing this? Is there an explanation? Thanks for your help. (PS: I've already checked robots.txt.)

    Read the article

  • Why use Flash as content headings? Why not?

    - by Itai
    I've noticed a large number of sites lately which use Flash to replace what would be simple HTML headings (h1, h2, h3). Some of them do it consistently for every header. Why would you do this? What are the pros and cons of this? This seems strange to me and really inefficient, particularly since a page can have dozens of headers. I notice this because I use Flash Block (which temporarily disables Flash), so not everyone may have seen it. Just today I landed here with 4 Flash headers. I've seen dozens of such uses of Flash this week alone, and I am wondering if it has some benefits.

    Read the article

  • How can a gamepad control THE mouse?

    - by Boris
    There are many questions about this subject:

      - Remapping both mouse and keyboard to a gamepad
      - How do I configure a joystick or gamepad?
      - How to control the mouse pointer via my keyboard?
      - ...

    But the purpose of those questions/answers is to be able to use the gamepad for playing a game. I would like a solution to use the gamepad to control THE mouse, that is, to replace the mouse with the gamepad in all applications. That way I could control my computer in the living room from my couch with a wireless gamepad.

    Read the article

  • Question on refactoring and code design

    - by Software Engeneering Learner
    Suppose I have a class with a constant static final field. Then, in certain situations, I want that field to be different. It can still be final, because it would be initialized in the constructor. My question is, which strategy should I use:

      - add this field value as a constructor parameter,
      - create 2 subclasses, replace the original field usage with some protected method and override it in the subclasses, or
      - create some composite class that holds an instance of my class inside and somehow changes that value?

    Which approach should I use and why?
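
    Purely to make the first option concrete (sketched in C#, where the question's final field maps to a readonly field; the Widget class and threshold value are made up for illustration):

      public class Widget
      {
          // The old constant stays around as the default value.
          private const int DefaultThreshold = 10;

          // Still fixed for the lifetime of the object, but chosen per instance.
          private readonly int _threshold;

          public Widget() : this(DefaultThreshold) { }

          public Widget(int threshold)
          {
              _threshold = threshold;
          }
      }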

    Read the article

  • ACORD LOMA Session Highlights Policy Administration Trends

    - by Helen Pitts
    Helen Pitts, senior product marketing manager for Oracle Insurance, attended and is blogging from the ACORD LOMA Insurance Forum this week.

    Above: Paul Vancheri, Chief Information Officer, Fidelity Investments Life Insurance Company. Vancheri gave a presentation during the ACORD LOMA Insurance Systems Forum about the key elements of modern policy administration systems and how insurers can mitigate risk during legacy system migrations to safely introduce new technologies.

    When I had a few particularly challenging honors courses in college, my father, a long-time technology industry veteran, used to say, "If you don't know how to do something, go ask the experts. Find someone who has been there and done that, don't be afraid to ask the tough questions, and apply and build upon what you learn." (Actually, he still offers this same advice today.) That's probably why my favorite sessions at industry events, like the ACORD LOMA Insurance Forum this week, are those that include insight on industry trends and case studies from carriers who share their experiences and offer best practices based upon their own lessons learned. I had the opportunity to attend a particularly insightful session Wednesday as Craig Weber, senior vice president of Celent's Insurance practice, and Paul Vancheri, CIO of Fidelity Life Investments, presented "Managing the Dynamic Insurance Landscape: Enabling Growth and Profitability with a Modern Policy Administration System."

    Policy Administration Trends

    Growing the business is the top issue when it comes to IT among both life and annuity and property and casualty carriers, according to Weber. To drive growth and capture market share from competitors, carriers are looking to modernize their core insurance systems, with 65 percent of the CIOs participating in recent Celent research citing plans to replace their policy administration systems. Weber noted that there has been continued focus and investment, particularly in the last three years, by software and technology vendors to offer modern, rules-based, configurable policy administration solutions. He added that these solutions are continuing to evolve with the ongoing aim of helping carriers rapidly meet shifting business needs -- whether it is to launch new products to market faster than the competition, adapt existing products to meet shifting consumer and/or regulatory demands, or to exit unprofitable markets. He closed by noting the top four trends for policy administration, either in the process of being adopted today or on the not-so-distant horizon:

      - Underwriting and service desktops
      - New business automation
      - Convergence of ultra-configurable and domain content-rich systems
      - Better usability and screen design

    Mitigating the Risk When Making the Decision to Modernize

    Third-party analyst research from advisory firms like Celent was a key part of the due diligence process for Fidelity as it sought a replacement for its legacy policy administration system back in 2005, according to Vancheri. The company's business opportunities were outrunning system capability. Its legacy system had not been upgraded in several years and was deficient from a functionality and currency standpoint. This was constraining the carrier's ability to rapidly configure and bring new and complex products to market. The company sought a new, modern policy administration system, one that would enable it to keep pace with rapid and often unexpected industry changes and stay ahead of the competition.
    A cross-functional team that included representatives from finance, actuarial, operations, client services and IT conducted an extensive selection process. This process included deep documentation review, pilot evaluations, demonstrations of required functionality and complex problem-solving, infrastructure integration capability, and the ability to meet the company's desired cost model. The company ultimately selected an adaptive policy administration system that met its requirements to:

      - Deliver ease of use - eliminating paper and rework, while easing the burden on representatives to sell and service annuities
      - Provide customer parity - offering Web-based capabilities in alignment with the company's focus on delivering a consistent customer experience across its business
      - Deliver scalability and efficiency - enabling automation, while simplifying and standardizing systems across its technology stack
      - Offer desired functionality - supporting Fidelity's product configuration / rules management philosophy, focus on customer service and technology upgrade requirements
      - Meet cost requirements - including implementation, professional services and license fees and ongoing maintenance
      - Deliver upon business requirements - enabling the ability to drive time to market for new products and flexibility to make changes

    Best Practices for Addressing Implementation Challenges

    Based upon lessons learned during the company's implementation, Vancheri advised carriers to evaluate staffing capabilities and cultural impacts, review business requirements to avoid rebuilding legacy processes, factor in dependent systems, and review policies and practices to secure customer data. His formula for success: upfront planning + clear requirements = precision execution.

    Achieving a Return on Investment

    Vancheri said the decision to replace their legacy policy administration system and deploy a modern, rules-based system -- before the economic downturn occurred -- has been integral in helping the company adapt to shifting market conditions, while enabling growth in its direct channel sales of variable annuities. Since deploying its new policy admin system, the company has reduced its average time to market for new products from 12-15 months to 4.5 months. The company has since migrated its other products to the new system and retired its legacy system, significantly decreasing its overall product development cycle. From a processing standpoint, Vancheri noted the company has achieved gains in automation, information, and ease of use, resulting in improved real-time data edits, controls for better quality, and tax handling capability. Plus, by having only one platform to manage, the company has simplified its IT environment and is well positioned to deliver system enhancements for greater efficiencies.

    Commitment to Continuing the Investment

    In the short and longer term, Vancheri said the company plans to enhance business functionality to support money movement, wire automation, divorce processing on payout contracts and cost-based tracking improvements. It also plans to continue system upgrades to remain current, as well as focus on further reducing cycle time, driving down maintenance costs, and integrating with other products.

    Helen Pitts is senior product marketing manager for Oracle Insurance, focused on life/annuities and enterprise document automation.

    Read the article

  • Rabbitvcs Nautilus not working on ubuntu 13.04

    - by LX7
    I just installed RabbitVCS on Ubuntu 13.04 as per the official instructions. When I tried to run apt-get install rabbitvcs-nautilus3, I got the following error message:

      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Package rabbitvcs-nautilus3 is not available, but is referred to by another package.
      This may mean that the package is missing, has been obsoleted, or
      is only available from another source
      However the following packages replace it:
        rabbitvcs-nautilus
      E: Package 'rabbitvcs-nautilus3' has no installation candidate

    So I installed it with apt-get install rabbitvcs-nautilus, and now the RabbitVCS options are not showing when I right-click on a folder. Any help and suggestions will be highly appreciated. Thanks.

    Read the article

  • Patch an Existing NK.BIN

    - by Kate Moss' Open Space
    As you know, we can use the MAKEIMG.EXE tool to create the OS image file, NK.BIN, or run ROMIMAGE.EXE with a BIB file for more precise control. But what if the image file is already created and needs to be patched, or you want to extract a file from NK.BIN? Platform Builder provides many useful command line utilities, and today I am going to introduce one of them, BINMOD.EXE. http://msdn.microsoft.com/en-us/library/ee504622.aspx is the official page for the BinMod tool. As the page says, the BinMod Tool (binmod.exe) extracts files from a run-time image and replaces files in a run-time image, and its usage is

      binmod [-i imagename] [-r replacement_filename.ext | -e extraction_filename.ext]

    This is a simple tool and is easy to use; if we want to extract a file from nk.bin, just type

      binmod -i nk.bin -e filename.ext

    And that's it! Or you can try the -r option to replace a file inside NK.BIN. The small tool is good, but there is a limitation: because the files in the MODULES section are fixed up during ROMIMAGE, the original file format is not preserved, so extracting or replacing a file in the MODULES section is impossible.

    So, just like this small tool, this post is supposed to end here, right? Nah... It is not that easy. Just try the above example, and you will find the tool does not work! Double check that the file is in the FILES section and the NK.BIN is good, but it just quits. Before you throw away this useless toy, we can try to fix it! Yes, the source of this tool is available in your CE6 tree, private\winceos\COREOS\nk\tools\romimage\binmod. As it is a tool that runs on your Windows desktop, you need the Windows SDK or Visual Studio to build the code. (I am going to save you some time by skipping the details, as building a desktop console mode program is fairly trivial.) cbinmod.cpp holds the core logic for this program, and following up on the error message we got, the following code looks suspect:

      //
      // Extra sanity check...
      //
      if((DWORD)(HIWORD(pTOCLoc->dllfirst) << 16) <= pTOCLoc->dlllast &&
         (DWORD)(LOWORD(pTOCLoc->dllfirst) << 16) <= pTOCLoc->dlllast)
      {
          dprintf("Found pTOC  = 0x%08x\n", (DWORD)dwpTOC);
          fFoundIt = true;
          break;
      }
      else
      {
          dprintf("NOTICE! Record %d looked like a TOC except DLL first = 0x%08X, and DLL last = 0x%08X\r\n", i, pTOCLoc->dllfirst, pTOCLoc->dlllast);
      }

    The logic checks if dllfirst <= dlllast, but look closer: the code only separates the high/low WORD of dllfirst and does not apply the same to dlllast. Is that on purpose or a bug? The TOC is created by ROMIMAGE.EXE, so let's move to ROMIMAGE. In private\winceos\coreos\nk\tools\romimage\romimage\bin.cpp:

      Module::s_romhdr.dllfirst  = (HIWORD(xip_mem->dll_data_bottom) << 16) | HIWORD(xip_mem->kernel_dll_bottom);
      Module::s_romhdr.dlllast   = (HIWORD(xip_mem->dll_data_top) << 16)    | HIWORD(xip_mem->kernel_dll_top);

    It is clear now: the high word of dllfirst is the upper 16 bits of the XIP DLL bottom and the low word is the upper 16 bits of the kernel DLL bottom; likewise, the high word of dlllast is the upper 16 bits of the XIP DLL top and the low word is the upper 16 bits of the kernel DLL top. Obviously, the correct statement should be

      if((DWORD)(HIWORD(pTOCLoc->dllfirst) << 16) <= (DWORD)(HIWORD(pTOCLoc->dlllast) << 16) &&
         (DWORD)(LOWORD(pTOCLoc->dllfirst) << 16) <= (DWORD)(LOWORD(pTOCLoc->dlllast) << 16))

    Updating the code like this should fix the issue, or, just as the comment says, it is only an extra sanity check, so you can simply get rid of it; either way the code can move forward and everything works as advertised.

    "Extracting out copies of files from the nk.bin... replacing files... etc."

    One last note: since the NK.BIN can be compressed, BinMod needs compress.dll to decompress the data; the DLL can be found in C:\program files\microsoft platform builder\6.00\cepb\idevs\imgutils.

    Read the article

  • Issues when upgrading gnome-session

    - by Gabriel A. Zorrilla
    I've been having issues since yesterday, after an upgrade. The GNOME session got messed up somehow and now I cannot install further upgrades nor new apps. Here is what I got from the terminal after doing the recommended apt-get install -f:

      (Reading database ... 166876 files and directories currently installed.)
      Preparing to replace gnome-session 3.0.1-0ubuntu1~build2 (using .../gnome-session_3.0.2-0ubuntu3~natty1_all.deb) ...
      Unpacking replacement gnome-session ...
      dpkg: error processing /var/cache/apt/archives/gnome-session_3.0.2-0ubuntu3~natty1_all.deb (--unpack):
       trying to overwrite '/usr/share/xsessions/gnome-shell.desktop', which is also in package gnome-shell 3.0.1-0ubuntu1~build1
      Errors were encountered while processing:
       /var/cache/apt/archives/gnome-session_3.0.2-0ubuntu3~natty1_all.deb
      E: Sub-process /usr/bin/dpkg returned an error code (1)

    In /var/log/apt/history.log, it says:

      Start-Date: 2011-05-29 17:56:16
      Commandline: apt-get -f install
      Upgrade: gnome-session:amd64 (3.0.1-0ubuntu1~build2, 3.0.2-0ubuntu3~natty1)
      Error: Sub-process /usr/bin/dpkg returned an error code (1)
      End-Date: 2011-05-29 17:56:21

    No idea what all this means.

    Read the article

  • Not able to add html tags through jquery in django [closed]

    - by user1665581
    I am trying to add HTML tags dynamically through jQuery in Django:

      $("#div1").append("<h3> Hey !! </h3>");
      $("#div1").append("<br/>");

    But they are not working. However, normal text is getting appended properly, like:

      $("#div1").append("Hey i am here");

    I even noticed that some of the tags weren't working outside the script either, like <br>, so I had to replace it with <br/>; I also had to apply a closing tag for input, and &nbsp is not working. What is wrong?

    Read the article
