Search Results

Search found 9744 results on 390 pages for 'k means'.


  • Cloud Adoption Challenges

    - by Herve Roggero
    Originally posted on: http://geekswithblogs.net/hroggero/archive/2013/11/07/cloud-adoption-challenges.aspx

    While cloud computing makes sense for most organizations and countless projects, I have seen customers significantly struggle with cloud adoption challenges. This blog post is not an attempt to provide a generic assessment of cloud adoption; rather it is an account of personal experiences in the field, some of which may or may not apply to your organization.

    Cloud First, Burst? In the rush to cloud adoption, some companies have decided to redesign their core systems with a cloud-first approach. However, a cloud-first approach means that the system may no longer work on-premises after it has been redesigned, specifically if the system depends on Platform as a Service (PaaS) components (such as Azure Tables). While PaaS makes sense when your company is in a position to adopt the cloud exclusively, it can be difficult to leverage with systems that need to work in different clouds or on-premises. As a result, some companies are starting to rethink their cloud strategy by designing for on-premises first, and modifying only the necessary components to burst into the cloud when needed. This generally means that the components need to work equally well in any environment, which requires leveraging Infrastructure as a Service (IaaS), additional investments for PaaS applications, or both.

    What's the Problem? Although most companies can benefit from cloud computing, not all of them can clearly identify a business reason for doing so other than in very generic terms. I have heard many companies claim "it's cheaper" or "it allows us to scale" without any specific metric or clear strategy behind the adoption decision. Other companies have a very clear strategy behind cloud adoption and can precisely articulate business benefits, such as "we have a 500% increase in traffic twice a year, so we need to burst into the cloud to avoid doubling our network and server capacity". Understanding the problem being solved by adopting cloud computing can significantly help organizations determine the optimum path and timeline to adoption.

    Performance or Scalability? I stopped counting the number of times I heard "the cloud doesn't scale; our database runs faster on a laptop". While performance and scalability are related concepts, they are nonetheless different in nature. Performance is a measure of response time under a given load (meaning with a specific number of users), while scalability is the performance curve over various loads. For example, one system could see great performance with 100 users but time out with 1,000 users, in which case the system doesn't scale. Another system could have average performance with 100 users but display the exact same performance with 1,000,000 users, in which case the system does scale. Understanding that cloud computing does not usually provide high performance, but instead provides the tools necessary to build a scalable system (usually using PaaS services such as queuing and data federation), is fundamental to proper cloud adoption.

    Uptime? Last but not least, you may want to read the Service Level Agreement of your cloud provider in detail if you haven't done so. If you are expecting 99.99% uptime annually, you may be in for a surprise. Depending on the component being used, there may be no associated SLA at all! Other components may be restarted at any time, or services may experience failover conditions weekly (or more often) based on the current overall condition of the cloud service provider, most of which is outside of your control. As a result, in PaaS cloud environments (and to a certain extent some IaaS systems), applications need to assume failure and retry gracefully in order to provide service continuity to end users.

    About Herve Roggero: Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting (http://www.bluesyntax.net). Herve's experience includes software development, architecture, database administration, and senior management with both global corporations and startup companies. Herve holds multiple certifications, including MCDBA, MCSE, and MCSD, as well as a Master's degree in Business Administration from Indiana University. He is the co-author of "PRO SQL Azure" and "PRO SQL Server 2012 Practices" from Apress, a PluralSight author, and runs the Azure Florida Association.
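    To make the "assume failure and retry gracefully" advice concrete, here is a minimal sketch (TypeScript; the wrapped operation, attempt count, and delays are illustrative assumptions, not prescriptions) of a retry wrapper with exponential backoff, a common pattern for riding out transient PaaS restarts and failovers:

        // Retry a transient-failure-prone operation with exponential backoff.
        // Attempt count and base delay are illustrative and need tuning.
        async function withRetry<T>(
            operation: () => Promise<T>,
            maxAttempts: number = 5,
            baseDelayMs: number = 200
        ): Promise<T> {
            let lastError: unknown;
            for (let attempt = 1; attempt <= maxAttempts; attempt++) {
                try {
                    return await operation();
                } catch (error) {
                    lastError = error;
                    if (attempt === maxAttempts) break;
                    // Backoff doubles each time (200ms, 400ms, 800ms, ...),
                    // giving a briefly failing-over service room to recover.
                    const delayMs = baseDelayMs * 2 ** (attempt - 1);
                    await new Promise((resolve) => setTimeout(resolve, delayMs));
                }
            }
            throw lastError;
        }

        // Hypothetical usage: wrap any call to a PaaS component that may be
        // restarted or failed over at any time.
        // const rows = await withRetry(() => queryTableStorage("customers"));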

    Read the article

  • Obfuscation is not a panacea

    - by simonc
    So, you want to obfuscate your .NET application. My question to you is: why? What are your aims when you obfuscate your application? To protect your IP and algorithms? To prevent crackers from breaking your licensing? Because your boss says you need to? To give yourself a warm fuzzy feeling inside? Obfuscating code correctly can be tricky; it can break your app if applied incorrectly, and it can cause problems down the line. Let me be clear - there are some very good reasons why you would want to obfuscate your .NET application. However, you shouldn't be obfuscating for the sake of obfuscating.

    Security through Obfuscation? Once your application has been installed on a user's computer, you no longer control it. If they do not want to pay for your application, then nothing can stop them from cracking it, even if the time cost to them is much greater than the cost of actually paying for it. Some people will not pay for software, even if it takes them a month to crack a $30 app. And once it is cracked, there is nothing stopping them from putting the result up on the internet. There should be nothing surprising about this; there is no software protection available for general-purpose computers that cannot be cracked by a sufficiently determined attacker. Only by completely controlling the entire stack - software, hardware, and the internet connection - can you have even a chance of being uncrackable. And even then, someone somewhere will still have a go, and probably succeed. Even high-end cryptoprocessors have known vulnerabilities that can be exploited by someone with a scanning electron microscope and lots of free time.

    So, then, why use obfuscation? Well, the primary reason is to protect your IP. What obfuscation is very good at is hiding the overall structure of your program, so that it's very hard to figure out what exactly the code is doing at any one time, what context it is running in, and how it fits in with the rest of the application - all of which you need to do to understand how the application operates. This is completely different to cracking an application, where you simply have to find the single toggle that determines whether the application is licensed or not, and flip it without the rest of the application noticing.

    However, again, there are limitations. An obfuscated application still has to run in the same way, and do the same thing, as the original unobfuscated application. This means that some of the protections applied to the obfuscated assembly have to be undone at runtime, or else it would not run on the CLR and do the same thing. And, again, since we don't control the environment the application is run on, there is nothing stopping a user from undoing those protections manually and reversing some of the obfuscation. It's a perpetual arms race, and it always will be. We have plenty of ideas lined up for new protections, and the new protections added in SA 6.6 (method parent obfuscation and a new control flow obfuscation level) are specifically designed to be harder to reverse and to make it harder to reconstruct the original structure.

    So then, by all means, obfuscate your application if you want to protect its algorithms and what it does. That's what SmartAssembly is designed to do. But make sure you are clear about what a .NET obfuscator can and cannot protect you against, and don't expect your obfuscated application to be uncrackable. Someone, somewhere, will crack your application if they want to and they don't have anything better to do with their time. The best we can do is dissuade the casual crackers and make it much more difficult for the serious ones.

    Cross posted from Simple Talk.

    Read the article

  • Come up with a real-world problem in which only the best solution will do (a problem from Introduction to algorithms) [closed]

    - by Mike
    EDITED (I realized that the question certainly needs a context.) Problem 1.1-5 in the book Introduction to Algorithms by Thomas Cormen et al. is: "Come up with a real-world problem in which only the best solution will do. Then come up with one in which a solution that is 'approximately' the best is good enough." I'm interested in its first statement. As I understand it, it asks us to name a real-world problem where only the exact solution will work, as opposed to a real-world problem where a good-enough solution will be OK. So what is the difference between an exact and a good-enough solution?

    Consider some physics problem, for example the simulation of fluid flow in a permeable medium. To make this simulation happen, some simplifying assumptions have to be made when deriving a mathematical model; otherwise the model becomes at best complex and at worst unsolvable. Virtually every particle in the universe has some influence on the fluid flow, but not all particles are equal: those that form the permeable medium are much more influential than ones located light years away. Then, when the mathematical model needs to be solved, an exact solution can rarely be found unless the model is simple enough (which probably means it isn't close to reality). We take an approximate numerical method and, after hours of coding and days of verification, come up with a program or algorithm that is a solution. And if the model and the algorithm give results close to the real problem to some degree, that is a good-enough solution.

    It's worth noting the difference between an exact solution algorithm and an exact computation result. When considering real-world problems and real-world computation machines, I believe the solutions to all physical problems where any calculations are performed cannot be exact, because universal physical constants are represented approximately in the computer. All numbers are represented with limited precision, at the very least limited by the amount of memory available to the computing machine.

    I can imagine plenty of problems where a good-enough, good-to-some-degree solution will work, like train scheduling, automated trading, satellite orbit calculation, and health care expert systems. In those cases exact solutions can't be derived due to constraints on computation time, limitations in computer memory, or the nature of the problems. I googled this question and like what this guy suggests: there are kinds of mathematical problems that need exact solutions (a little note here: because the question is taken from the book "Introduction to Algorithms", the term "solution" means an algorithm or a program, which in this case gives an exact answer on each input). But that's probably more of theoretical interest.

    So I would like to narrow the question down to: what are the real-world practical problems where only the best (exact) solution algorithm or program will do (and not a good-enough solution)? There are problems like the breaking of cryptographic ciphers where only an exact solution matters in practice, and yet, also in practice, the process of deciphering without knowing the secret should take a reasonable amount of time. Returning to the original question, this is a problem where a good-enough (fast-enough) solution will do; there is no practical need for an instant crack, though it is desired. So the quality of "best" can be understood in any sense: exact, fastest, requiring the least memory, having minimal possible network traffic, etc. And still I want this question to be theoretical if possible. In a sense, there may be an example of a computer X that has a limited resource R of amount Y, where the best solution to problem P is the one that takes no more than the available Y for inputs of size N*Y. But that's the problem of finding a solution for P on computer X, which is... well, good enough.

    My final thought is that we live in a world where programming solutions to practical problems are only required to be good enough. In rare cases very, very good, but still not the best ones. Isn't it? :) If it's not, can you provide an example? Or can you name any such unsolved problem of practical interest?
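    As a concrete illustration of the limited-precision point above, here is a tiny sketch (TypeScript, purely illustrative) showing that even a trivial decimal sum has no exact binary floating-point result:

        // 0.1 and 0.2 have no exact binary floating-point representation,
        // so even this trivial sum is already an approximation.
        const sum = 0.1 + 0.2;
        console.log(sum === 0.3); // false
        console.log(sum);         // 0.30000000000000004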

    Read the article

  • Multichannel Digital Engagement: Find Out How Your Organization Measures Up

    - by Michael Snow
    This article was originally published in the September 2013 edition of the Oracle Information InDepth Newsletter, Oracle WebCenter Edition.

    Thanks to mobile and social technologies, interactive online experiences are now commonplace. Not only that, they give consumers more choices, influence, and control than ever before. So how can you make your organization stand out? The key building blocks for delivering exceptional cross-channel digital experiences are outlined below. Also, a new assessment tool is available to help you measure your organization's ability to deliver such experiences.

    A clearly defined digital strategy. The customer journey is growing increasingly complex, encompassing multiple touchpoints and channels. It used to be easy to map marketing efforts to specific offline channels; for example, a direct mail piece with an offer to visit a store for a discounted purchase. Now it is more difficult to cultivate and track such clear cause-and-effect relationships. To deliver an integrated digital experience in this more complex world, organizations need a clearly defined and comprehensive digital marketing strategy that is backed up by an integrated set of software, middleware, and hardware solutions.

    Strong support for business agility and speed-to-market. As both IT and marketing executives know, speed-to-market and business agility are key to competitive advantage. That means marketers need solutions to support the rapid implementation of online marketing initiatives - plus the flexibility to adapt quickly to a changing marketplace. And IT needs tools with the performance, scalability, and ease of integration to support marketing efforts. Both teams benefit when business users are empowered to implement marketing initiatives on their own, with minimal IT intervention.

    The ability to deliver relevant, personalized content. Delivering a one-size-fits-all online customer experience is no longer acceptable. Customers expect you to know who they are, including their preferences and past relationship with your brand. That means delivering the most relevant content from the moment a visitor enters your site. To make that happen, you need a powerful rules engine so that marketers and business users can easily define site visitor segments and deliver content accordingly. That includes both implicit targeting that is based on the user's behavior, and explicit targeting that takes a user's profile information into account. Ideally, the rules engine can also intelligently weight recommendations when multiple segments apply to a specific customer.

    Support for social interactivity. With the advent of Facebook and LinkedIn, visitors expect to participate in and contribute to your web presence - and share their experience on their own social networks. That requires easy incorporation of user-generated content such as comments, ratings, reviews, polls, and blogs; seamless integration with third-party social networking sites; and support for social login, which helps to remove barriers to social participation.

    The ability to deliver connected, multichannel experiences that include powerful, flexible mobile capabilities. By 2015, mobile usage is projected to surpass that of PCs and other wired devices. In other words, mobile is an essential element in delivering exceptional online customer experiences. This requires the creation and management of mobile experiences that are optimized for delivery to the thousands of different devices that are in use today. Just as important, organizations must be able to easily extend their traditional web presence to the mobile channel and deliver highly personalized and relevant multichannel marketing initiatives, while also minimizing the time and effort required to manage mobile sites.

    Are you curious to know how your organization measures up when it comes to delivering an engaging, multichannel digital experience? If so, take this brief, 15-question online assessment and see how your organization scores in the areas of digital strategy, digital agility, relevance and personalization, social interactivity, and multichannel experience.

    Read the article

  • How to use Android's CacheManager?

    - by punnie
    I'm currently developing an Android application that fetches images using HTTP requests. It would be quite swell if I could cache those images in order to improve performance and bandwidth use. I came across the CacheManager class in the Android reference, but I don't really know how to use it, or what it really does. I already looked through this example, but I need some help understanding it: /core/java/android/webkit/gears/ApacheHttpRequestAndroid.java

    Also, the reference states: "Network requests are provided to this component and if they can not be resolved by the cache, the HTTP headers are attached, as appropriate, to the request for revalidation of content." I'm not sure what this means or how it would work for me, since CacheManager's getCacheFile accepts only a String URL and a Map containing the headers. I'm not sure what the attachment mentioned means. An explanation or a simple code example would really make my day. Thanks!

    Update: Here's what I have right now. I am clearly doing it wrong, I just don't know where.

        public static Bitmap getRemoteImage(String imageUrl) {
            URL aURL = null;
            URLConnection conn = null;
            Bitmap bmp = null;
            CacheResult cache_result = CacheManager.getCacheFile(imageUrl, new HashMap());
            if (cache_result == null) {
                try {
                    aURL = new URL(imageUrl);
                    conn = aURL.openConnection();
                    conn.connect();
                    InputStream is = conn.getInputStream();
                    cache_result = new CacheManager.CacheResult();
                    copyStream(is, cache_result.getOutputStream());
                    CacheManager.saveCacheFile(imageUrl, cache_result);
                } catch (Exception e) {
                    return null;
                }
            }
            bmp = BitmapFactory.decodeStream(cache_result.getInputStream());
            return bmp;
        }

    Read the article

  • Ruby - Feedzirra and updates

    - by mplacona
    Hi, I'm trying to get my head around Feedzirra here. I have it all set up and everything, and can even get results and updates, but something odd is going on. I came up with the following code:

        def initialize(feed_url)
          @feed_url = feed_url
          @rssObject = Feedzirra::Feed.fetch_and_parse(@feed_url)
        end

        def update_from_feed_continuously()
          @rssObject = Feedzirra::Feed.update(@rssObject)
          if @rssObject.updated?
            puts @rssObject.new_entries.count
          else
            puts "nil"
          end
        end

    Right, what I'm doing above is starting with the big feed and then only getting updates. I'm sure I must be doing something stupid, because even though I'm able to get the updates and store them in the same instance variable, after the first time I'm never able to get those again. Obviously this happens because I'm overwriting my instance variable with only the updates, and I lose the full feed object. I then thought about changing my code to this:

        def update_from_feed_continuously()
          feed = Feedzirra::Feed.update(@rssObject)
          if feed.updated?
            puts feed.new_entries.count
          else
            puts "nil"
          end
        end

    Well, I'm not overwriting anything, and that should be the way to go, right? WRONG. This means I'm doomed to always try to get updates to the same static feed object: although I get the updates in a variable, I'm never actually updating my "static feed object", and newly added items will keep being appended to "feed.new_entries", as in theory they are new. I'm sure I'm missing a step here, but I'd really appreciate it if someone could shed some light on this. I've been going through this code for hours and can't get to grips with it. Obviously it should work fine if I did something like:

        if feed.updated?
          puts feed.new_entries.count
          @rssObject = initialize(@feed_url)
        else

    because that would reinitialize my instance variable with a brand new feed object, and the updates would come again. But that also means that any new update added at that exact moment would be lost, and it is massive overkill, as I'd have to load the whole thing again. Thanks in advance!

    Read the article

  • Why is EXC_BAD_ACCESS so unhelpful?

    - by Dustin
    First let me say I come from a background in Flash/AS3, which I realize is not as strict about most things as iPhone/Objective-C. I suspect my question actually applies to AS3 as well, but let me ask it as pertaining to Obj-C. Why is the error EXC_BAD_ACCESS, and others like it, so unhelpful? I realize that it normally means mismanagement of memory somewhere, but why can't it tell you more about the problem? For instance, why doesn't it say "EXC_BAD_ACCESS: you tried to pass pointer suchAndSuch on line 123; however, you're an idiot, because you released it on line 69, so it's not available anymore"?

    I realize I can use the debugger to get more clues about where my error occurred, but many times this is only marginally helpful. For instance, sometimes none of the messages in the stack/thread/whatever are even my code. Other times it is my code, but at the top of the stack will be a message that has 4+ parameters. OK, thanks, debugger, you narrowed it down to 4 possible pointers, but why can't you just tell me which one!?

    I'm guessing there's just some fundamental explanation that I missed because of the background I came from, not needing to worry about memory and such. Although there is an error that can happen a lot in AS3 development that is equally mysterious and along the same lines: "Error #1009: Cannot access a property or method of a null object reference", which almost always means a variable you were expecting to be holding something is actually null. Why doesn't it tell me WHICH variable?!

    Read the article

  • Version control a content management system?

    - by Mike
    I have the following directory structure in the CMS application we have written:

        /application
        /modules
            /cms
            /filemanager
            /block
            /pages
            /sitemap
            /youtube
            /rss
        /skin
            /backend
                /default
                    /css
                    /js
                    /images
            /frontend
                /default
                    /css
                    /js
                    /images

    Application contains code specific to the current CMS implementation, i.e. code for this specific CMS. Modules contain reusable portions of code that we share across projects, such as libraries to work with YouTube or RSS feeds. We include these as git submodules, so that we can update the module in any website and push the changes back across all other projects. It makes it really easy to apply a change to our code and distribute it.

    We wanted to turn the CMS into a module so we get the same benefit - we can run the entire project under source control, then update the CMS as required through a git submodule. We have run into a problem, however: the CMS requires JavaScript/images/CSS in order for it to work correctly. Things we have thought about:

    We could create two submodules, one for cms-skin and one for cms, but this means you cannot "git pull" one version without having some idea of which versions of skin work with which versions of cms; i.e. version 1.2.2 CMS might have issues with 1.0.3 CMS-Skin.

    We could add the skin to the cms module, but this has the following problems: skin should be available under the document root while module code shouldn't be (and if it is, it should probably be secured via .htaccess), and it doesn't seem to make any sense bundling assets with PHP code.

    We could create a symlink from /skin/backend/ to /modules/cms/skin, but does this cause any security problems, and do we want to require something like a symlink for the application to work?

    We could create a git hook or a shell script that copies files from modules/cms/skin to skin/backend when an update occurs, but this means we lose the ability to edit CMS core files in a project and then push them back.

    How is this typically done in large-scale CMSs? How is it possible to get the source code for a CMS under version control, work on the application for a client, then update the source code as releases are made by the vendor? How do applications like Magento or Drupal do this?

    Read the article

  • Permission denied for cvs server via ssh

    - by NovumCoder
    I can't create a new project by importing a Java project via Eclipse onto my CVS server over the internet. As root, I created a directory called /priv/cvs/ and then ran "cvs -d /priv/cvs/ init". I created a user named cvs and a group called cvs; the repository is owned by user cvs and group cvs. Then I created a user "ben" whose only group is cvs. I restricted the user "ben" to the CVS functionality by disallowing password login over SSH, allowing only a public key, which is added on the server in his home directory in the file authorized_keys2. The content of authorized_keys2 is as follows:

        no-port-forwarding,no-X11-forwarding,command="/usr/bin/cvs server" ssh-rsa [public_key_content] rsa-key

    Connecting to the server works pretty well. Eclipse asks for the passphrase for the private key to connect to the server, authentication works, and Eclipse is able to run CVS commands. But when importing my project using Team > Share Project, I get the error:

        The server reported an error: Permission denied
        projectname: cvs server: cannot open /priv/cvs/CVSROOT/config: Permission denied
        projectname: Cannot access /priv/cvs/CVSROOT

    The access rights for the CVS root (/priv/cvs/) are set to 770, which means that the owner (cvs) and members of the group cvs are allowed to read and write. So why do I get "Permission denied"? When I set the folder to 777, which means read/write for ALL, it works. But I don't want that; I only want cvs users to have read/write access to this folder. Is there something I have misunderstood about access rules?

    Read the article

  • Differing paths for lua script and app

    - by Person
    My problem is that I'm having trouble specifying paths for Lua to look in. For example, in my script I have a require("someScript") line that works perfectly (it is able to use functions from someScript) when the script is run standalone. However, when I run my app, the script fails. I believe this is because Lua is looking in a location relative to the application rather than relative to the script. Hardcoding the entire path down to the drive isn't an option, since people can download the game wherever they like, so the highest I can go is the root folder for the game.

    We have XML files to load in information on objects. In them, when we specify the script the object uses, we only have to write something like Content\Core\Scripts\someScript.lua, where Content is in the same directory as Debug and the app is located inside Debug. If I try putting that (the Content\Core... path) in Lua's package.path, I get errors when I try to run the script standalone. I'm really stuck and am not sure how to solve this. Any help is appreciated. Thanks.

    P.S. When I print out the default package.path in the app, I see syntax like ;.\?.lua in a sequence like ;.\?.lua;c:...(long file path)\Debug\?.lua; - I assume the ; means the end of the path, but I have no idea what the .\?.lua means. Any Lua file in the directory?

    Read the article

  • Getting past dates in HP-UX with ksh

    - by Alejandro Atienza Ramos
    OK, so I need to translate a script from a nice Linux + bash configuration to ksh on HP-UX. Each and every command expects a different syntax and I want to kill myself. But let's skip the rant. This is part of my script:

        anterior=`date +"%Y%0m" -d '1 month ago'`

    I basically need to get a past date in the format 201002. Never mind the fact that, in the new environment, %0m means "no zeroes", while in the other one it actually means "yes, please put that zero in my string". It doesn't even accept the "1 month ago". I've read the man page for date on HP-UX and it seems you just can't do date arithmetic with it. I've been looking around for a while, but all I find are lengthy solutions. I can't quite understand why such a typical administrative task as adding dates needs so much fuss. Isn't there a way to convert my one-liner to, well, I don't know, another one-liner? Come on, I've seen proposed solutions that used bc, had thirty-plus lines, and had magic numbers all over the script. The simplest solutions seem to use Perl... but I don't know how to modify them, as they're quite arcane. Thanks!

    Read the article

  • Pixel Perfect Collision Detection in HTML5 Canvas

    - by Armin Ronacher
    Hi, I want to check for a collision between two sprites in an HTML5 canvas. For the sake of the discussion, let's assume that both sprites are IMG objects and a collision means that the alpha channel is not 0. Now both of these sprites can have a rotation around the object's center, but no other transformation, in case this makes it any easier.

    Now the obvious solution I came up with would be this:

    1. calculate the transformation matrix for both sprites
    2. figure out a rough estimate of the area where the code should test (like the offset of both, plus calculated extra space for the rotation)
    3. for all the pixels in the intersecting rectangle, transform the coordinate and test the image at the calculated position (rounded to the nearest neighbor) for the alpha channel; then abort on the first hit

    The problems I see with that are that a) there are no matrix classes in JavaScript, which means I have to do the matrix math in JavaScript, which could be quite slow, and b) I have to test for collisions every frame, which makes this pretty expensive. Furthermore, I have to replicate something I already have to do on drawing (or that canvas does for me when setting up the matrices). I wonder if I'm missing anything here and if there is an easier solution for collision detection.
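    For what it's worth, here is a minimal sketch of that approach (TypeScript; it assumes same-origin images, rotation about the sprite's center, and no other transforms, and it is written for clarity rather than speed - real code would cache the alpha samplers instead of rebuilding them every frame):

        interface Sprite {
            image: HTMLImageElement;
            x: number;        // center x in world coordinates
            y: number;        // center y in world coordinates
            rotation: number; // radians, about the center
        }

        // Draw the untransformed image to an offscreen canvas once so its
        // alpha channel can be read, and return a sampler that maps a world
        // coordinate into the sprite's local space (inverse rotation).
        function makeAlphaSampler(sprite: Sprite): (wx: number, wy: number) => number {
            const canvas = document.createElement("canvas");
            canvas.width = sprite.image.width;
            canvas.height = sprite.image.height;
            const ctx = canvas.getContext("2d")!;
            ctx.drawImage(sprite.image, 0, 0);
            const data = ctx.getImageData(0, 0, canvas.width, canvas.height).data;
            const cos = Math.cos(-sprite.rotation);
            const sin = Math.sin(-sprite.rotation);
            return (wx, wy) => {
                const dx = wx - sprite.x;
                const dy = wy - sprite.y;
                // Undo the rotation, then shift to top-left image coordinates,
                // rounding to the nearest neighbor.
                const lx = Math.round(dx * cos - dy * sin + canvas.width / 2);
                const ly = Math.round(dx * sin + dy * cos + canvas.height / 2);
                if (lx < 0 || ly < 0 || lx >= canvas.width || ly >= canvas.height) {
                    return 0;
                }
                return data[(ly * canvas.width + lx) * 4 + 3]; // alpha byte
            };
        }

        function spritesCollide(a: Sprite, b: Sprite): boolean {
            // Rough estimate: half the image diagonal bounds the sprite under
            // any rotation, so intersect the two bounding squares first.
            const ra = Math.hypot(a.image.width, a.image.height) / 2;
            const rb = Math.hypot(b.image.width, b.image.height) / 2;
            const minX = Math.max(a.x - ra, b.x - rb);
            const maxX = Math.min(a.x + ra, b.x + rb);
            const minY = Math.max(a.y - ra, b.y - rb);
            const maxY = Math.min(a.y + ra, b.y + rb);
            if (minX > maxX || minY > maxY) return false;

            const alphaA = makeAlphaSampler(a);
            const alphaB = makeAlphaSampler(b);
            for (let y = Math.floor(minY); y <= maxY; y++) {
                for (let x = Math.floor(minX); x <= maxX; x++) {
                    if (alphaA(x, y) !== 0 && alphaB(x, y) !== 0) {
                        return true; // abort on the first overlapping pixel
                    }
                }
            }
            return false;
        }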

    Read the article

  • JEE6 Global JNDI Name and Maven Deployment

    - by wobblycogs
    I'm having some problems with the global JNDI names of my EJB resources, which is (or at least will) cause my JNDI lookups to fail. The project is being developed in NetBeans and is a standard Maven web application. When my application is deployed to GlassFish 3.0, the application name is set to something like com.example_myapp_war_1.0-SNAPSHOT, which is all well and good from NetBeans' point of view because it ensures the name is unique, but it also means all the EJBs get global names such as this:

        java:global/com.example_myapp_war_1.0-SNAPSHOT/CustomerService

    This, of course, is going to cause problems, because every time the version changes, all the global names change (I've tested this by changing the version, and the names indeed changed). The name is generated from the POM file and is a concatenation of:

        <groupId>com.example</groupId>
        <artifactId>myapp</artifactId>
        <packaging>war</packaging>
        <version>1.0-SNAPSHOT</version>

    Up until now I've got away with just injecting all the resources using @EJB, but now I need to access the CustomerService EJB from a JSF converter, so I'm doing a JNDI lookup like this:

        try {
            Context ctx = new InitialContext();
            CustomerService customerService = (CustomerService) ctx.lookup(
                "java:global/com.example_myapp_war_1.0-SNAPSHOT/CustomerService");
            return customerService.get(submittedValue);
        } catch (Exception e) {
            logger.error("Failed to convert customer.", e);
            return null;
        }

    This will clearly break when the application is properly released and the module name changes. So, the million-dollar question: how can I set the module name in Maven, or how do I recover the module name so that I can programmatically build the JNDI name at runtime? I've tried setting it in the web.xml file as suggested by that link, but it was ignored. I think I'd rather build the name at runtime, as that means there is less scope for screw-ups when the application is deployed. Many thanks for any help - I've been tearing my hair out all day on this.

    Read the article

  • Best way for a remote web app to authenticate users in my current web app?

    - by jklp
    A bit of background: I'm working on an existing web application which has a set of users who are able to log in via a traditional login screen with a user name and password, etc. Recently we've managed to score a client (who has their own intranet site) who wants their users to log into their intranet and then click a link there that redirects to our application and logs them into it automatically. I've had two suggestions on how to implement this so far:

    1. Create a URL which takes two parameters ("username" and "password") and have the intranet site pass those parameters to us (our connection is via TLS, so it's all encrypted). This would work fine, but it seems a little hacky, and it also means that the logins and passwords have to be the same on both systems (requiring some kind of web service which can update the passwords for users - which also seems a bit insecure).

    2. Provide a token to the intranet, so when the client clicks on a link on the intranet, it sends the token to us, along with the user name (and no password), which means they're authenticated. Again, this sounds a bit hacky, as isn't that essentially the same as providing everyone with the same password to log in?

    So to summarise, I'm after the following things: a way for users who are already authenticated on the intranet to log into our system without too much messing around, and without using an external system to authenticate, i.e. LDAP / Kerberos; and something which isn't too specific to this client and can easily be implemented by other intranets to log in.
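    For reference, option 2 does not have to boil down to a single shared password: the token can be per-user and short-lived if it is signed. Below is a minimal sketch (TypeScript/Node; the secret, field layout, and lifetimes are assumptions for illustration) of an HMAC-signed token the intranet could mint and the application could verify:

        import { createHmac, timingSafeEqual } from "node:crypto";

        // Secret shared between the intranet and our application only;
        // the value here is a placeholder.
        const SHARED_SECRET = "replace-with-a-long-random-secret";

        // The intranet builds "username|expiry|signature" and appends it to
        // the redirect link.
        function issueToken(username: string, ttlSeconds = 60): string {
            const expires = Math.floor(Date.now() / 1000) + ttlSeconds;
            const payload = `${username}|${expires}`;
            const signature = createHmac("sha256", SHARED_SECRET)
                .update(payload)
                .digest("hex");
            return `${payload}|${signature}`;
        }

        // Our application recomputes the signature and checks the expiry
        // before creating a session; returns the username on success.
        function verifyToken(token: string): string | null {
            const [username, expires, signature] = token.split("|");
            if (!username || !expires || !signature) return null;
            if (Math.floor(Date.now() / 1000) > Number(expires)) return null;
            const expected = createHmac("sha256", SHARED_SECRET)
                .update(`${username}|${expires}`)
                .digest("hex");
            const given = Buffer.from(signature, "hex");
            const wanted = Buffer.from(expected, "hex");
            return given.length === wanted.length && timingSafeEqual(given, wanted)
                ? username
                : null;
        }

    Because each token embeds the user name and an expiry, a leaked link is only useful for that one user for a short window, which addresses the "same password for everyone" concern.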

    Read the article

  • When does a PHP <5.3.0 daemon script receive signals?

    - by MidnightLightning
    I've got a PHP script in the works that is a job worker; its main task is to check a database table for new jobs, and if there are any, to act on them. But jobs will come in in bursts, with long gaps in between, so I devised a sleep cycle like:

        while (true) {
            if ($jobs = get_new_jobs()) {
                // Act upon the jobs
            } else {
                // No new jobs now
                sleep(30);
            }
        }

    Good, but in some cases that means there might be a 30-second lag before a new job is acted upon. Since this is a daemon script, I figured I'd try the pcntl_signal hook to catch a SIGUSR1 signal to nudge the script awake, like:

        $_isAwake = true;

        function user_sig($signo) {
            global $_isAwake;
            daemon_log("Caught SIGUSR1");
            $_isAwake = true;
        }
        pcntl_signal(SIGUSR1, 'user_sig');

        while (true) {
            if ($jobs = get_new_jobs()) {
                // Act upon the jobs
            } else {
                // No new jobs now
                daemon_log("No new jobs, sleeping...");
                $_isAwake = false;
                $ts = time();
                while (time() < $ts + 30) {
                    sleep(1);
                    // Did a signal happen while we were sleeping?
                    // If so, stop sleeping.
                    if ($_isAwake) break;
                }
                $_isAwake = true;
            }
        }

    I broke the sleep(30) up into smaller sleep bits, in case a signal doesn't interrupt a sleep() command, thinking that this would cause at most a one-second delay. But in the log file, I'm seeing that the SIGUSR1 isn't being caught until after the full 30 seconds have passed (and maybe the outer while loop resets). I found the pcntl_signal_dispatch command, but that's only for PHP 5.3 and higher. If I were using that version, I could stick a call to that command before the if ($_isAwake) check, but as it currently stands I'm on 5.2.13. In what sort of situations is the signal queue processed in PHP versions without the means to explicitly trigger queue parsing? Could I put some other otherwise-useless command in that sleep loop that would trigger a signal-queue parse within there?

    Read the article

  • Help with Donald B. Johnson's algorithm: I cannot understand the pseudo-code (a problem from the 1975 paper)

    - by Pitelk
    Hi, does anyone know Donald B. Johnson's algorithm, which enumerates all the elementary circuits (cycles) in a directed graph? I have the paper he published in 1975, but I cannot understand the pseudo-code. My goal is to implement this algorithm in Java.

    Some questions I have: for example, what is the matrix Ak it refers to? The pseudo-code mentions that

        Ak := adjacency structure of strong component K with least
              vertex in subgraph of G induced by {s, s+1, ..., n};

    Does that mean I have to implement another algorithm that finds the Ak matrix? Another question: what does the following mean?

        begin logical f;

    Does the line "logical procedure CIRCUIT (integer value v);" mean that the CIRCUIT procedure returns a logical variable? The pseudo-code also has the line "CIRCUIT := f;" - what does that mean?

    It would be great if someone could translate this 1970s pseudo-code to a more modern style of pseudo-code so I can understand it. In case you are interested in helping but cannot find the paper, please email me at [email protected] and I will send you the paper. Thanks in advance.
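    To answer the smaller questions from the paper's Algol-style conventions: "logical" just declares a boolean, assigning to the procedure's own name ("CIRCUIT := f") is how an Algol procedure returns its value, and Ak is the adjacency structure of the strong component with the least vertex in the subgraph induced by {s, ..., n} - so a faithful implementation does need a strongly-connected-components algorithm (e.g. Tarjan's) as a subroutine. As a rough modern rendering, here is a simplified sketch (TypeScript, not validated against the paper) that keeps the blocked/B-list machinery but replaces the SCC pruning with a plain "only vertices >= s" restriction, which affects efficiency rather than correctness:

        type Graph = number[][]; // adjacency lists; vertices are 0..n-1

        function elementaryCircuits(graph: Graph): number[][] {
            const n = graph.length;
            const circuits: number[][] = [];
            const blocked = new Array<boolean>(n).fill(false);
            // B[w] holds vertices to wake up when w becomes unblocked.
            const B: Set<number>[] = Array.from({ length: n }, () => new Set());
            const stack: number[] = [];
            let s = 0;

            function unblock(u: number): void {
                blocked[u] = false;
                const pending = [...B[u]];
                B[u].clear();
                for (const w of pending) {
                    if (blocked[w]) unblock(w);
                }
            }

            // The paper's "logical procedure CIRCUIT"; the boolean f is true
            // iff some circuit through v back to s was found.
            function circuit(v: number): boolean {
                let f = false;
                stack.push(v);
                blocked[v] = true;
                for (const w of graph[v]) {
                    if (w < s) continue; // stay in the subgraph induced by {s..n}
                    if (w === s) {
                        circuits.push([...stack, s]); // elementary circuit found
                        f = true;
                    } else if (!blocked[w] && circuit(w)) {
                        f = true;
                    }
                }
                if (f) {
                    unblock(v);
                } else {
                    // No circuit through v right now: defer unblocking v until
                    // one of its successors becomes unblocked.
                    for (const w of graph[v]) {
                        if (w >= s) B[w].add(v);
                    }
                }
                stack.pop();
                return f;
            }

            for (s = 0; s < n; s++) {
                for (let i = s; i < n; i++) {
                    blocked[i] = false;
                    B[i].clear();
                }
                circuit(s);
            }
            return circuits;
        }

        // Example: a 3-cycle sharing vertices with a 2-cycle.
        // elementaryCircuits([[1], [0, 2], [0]]) returns
        // [[0, 1, 0], [0, 1, 2, 0]]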

    Read the article

  • How to debug UrlRewriter.NET?

    - by vfilby
    I have looked at their help page, and it seems I can register a debug logger that outputs information to the "standard ASP.NET debug window". My problem is I don't know what that means. If it means the Debug output window in Visual Studio (where you see build output, debug output, and more), I am not seeing any UrlRewriter debug output. The rules are working (mostly); I just want more debug output to fix some issues. I added the register call to the rewriter section like this:

        <rewriter>
          <register logger="Intelligencia.UrlRewriter.Logging.DebugLogger, Intelligencia.UrlRewriter" />
          ....
        </rewriter>

    I am hosting this website locally in IIS on Vista; to debug it, I attach the debugger to the w3wp process. Other selected parts from the web.config:

        <compilation debug="true">
          <assemblies>
          ...
          </assemblies>
        </compilation>
        <trace enabled="true"/>

    Where should I see the debug output from UrlRewriter.NET? If it is in the Visual Studio debug output window, any ideas why I am not seeing it there?

    Read the article

  • Android Scoreloop, OpenFeint, et al.

    - by theblitz
    I am looking to use one of the social gaming networks in my Android program. Most important for me is the ability to build a continuous leaderboard in which players move up and down depending on their wins/losses against others. The idea is for players to challenge each other head-to-head: the winner gains points and the loser loses points. Equally important, I want this feature to include the possibility of "charging" the players game coins. Scoreloop includes the possibility of challenges, but there they exist in order to win coins off other players; in other words, they are the means to the end. In my case I need it to be the other way around: the "end" is to be higher on the leaderboard, and the "means" is to play others with coins. Scoreloop does have a continuous leaderboard, but it is not accessible from the program. I tried looking at OpenFeint, but their site is a real mess: it is impossible to understand from there exactly what is and isn't available. I signed up and tried to add my program; I ended up adding it four times and cannot delete it!

    Read the article

  • Issues using SDL2_gfx with C++

    - by Lance Zimmerman
    When I use it with the common.c/common.h files that come with it, I get "LNK2019: unresolved external symbol _SDL_main" in VS201X if the file containing main has the .cpp extension instead of .c. That is, if I rename the file containing main to test.c, it compiles; when I change it back to test.cpp, it fails to link. I think that means it only works when compiled as C. Here is the code I copied from SDL2_gfxPrimitives.c:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <math.h>
        #include <time.h>

        #include "common.h"
        #include "SDL2_gfxPrimitives.h"

        static CommonState *state;

        int main(int argc, char* argv[])
        {
            /* Initialize test framework */
            state = CommonCreateState(argv, SDL_INIT_VIDEO);
            return 1;
        }

    I need to use the library from C++, but it seems I don't know enough to figure out how. Any help would be appreciated; I've spent two days attempting to figure this out.

    Read the article

  • Limit the number of service calls in a RESTful application

    - by Slavo
    Imagine some kind of banking application with a screen to create accounts. Each Account has a Currency and a Bank as properties, Currency being a separate class, as is Bank. The code might look something like this:

        public class Account {
            public Currency Currency { get; set; }
            public Bank Bank { get; set; }
        }

        public class Currency {
            public string Code { get; set; }
            public string Name { get; set; }
        }

        public class Bank {
            public string Name { get; set; }
            public string Country { get; set; }
        }

    According to REST design principles, each resource in the application should have its own service, and each service should have methods that map nicely to the HTTP verbs. So in our case we have an AccountService, a CurrencyService, and a BankService. On the screen for creating an account, we have some UI to select the bank from a list of banks and to select a currency from a list of currencies. Imagine it is a web application and those lists are dropdowns. This means that one dropdown is populated from the CurrencyService and one from the BankService. In other words, when we open the screen for creating an account, we need to make two service calls to two different services. If that screen is not by itself on a page, there might be more service calls from the same page, impacting performance. Is this normal in such an application? If not, how can it be avoided? How can the design be modified without moving away from REST?
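    One common mitigation, independent of any particular framework, is to keep the two resources separate on the server but issue the calls concurrently from the client, so the page pays the latency of the slower call rather than the sum of both. A minimal sketch (TypeScript; the endpoint paths are hypothetical):

        // Load both reference lists in parallel; total latency is that of
        // the slower call rather than the sum of the two.
        async function loadAccountFormData() {
            const [currencies, banks] = await Promise.all([
                fetch("/api/currencies").then((response) => response.json()),
                fetch("/api/banks").then((response) => response.json()),
            ]);
            return { currencies, banks };
        }

    Since reference data like currencies and banks changes rarely, serving those resources with long-lived HTTP cache headers can also eliminate most of these calls entirely without bending the resource-per-service design.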

    Read the article

  • PostgreSQL: Auto-partition a table

    - by Adam Matan
    Hi, I have a huge database which holds pairs of numbers (A, B), each ranging from 0 to 10,000 and stored as floats, e.g. (1, 9984.4), (2143.44, 124.243), (0.55, 0), ... Since the PostgreSQL table which stores these pairs grew quite large, I have decided to partition it into inheriting sub-tables. I intend to create 100 such tables, each storing a range of 1000x1000. The problem is that these numbers tend to come in large chunks of nearby numbers, which means that in the future some tables will be nearly empty and some will hold a very large portion of the database. Unfortunately, the distribution of future pairs is as yet unknown. I am looking for a way to automatically repartition my table. That means that if a certain sub-table holds more than a specific number of pairs, it will be automatically partitioned into four sub-sub-tables, and so on. My questions are: Is recursive partitioning and inheritance possible in PostgreSQL 8.3? Will indexes and query plans understand it? What's the best way to split a sub-table once it has grown too large? I should point out that this isn't a live database, so a downtime of a few hours every week is totally acceptable. Thanks in advance, Adam

    Read the article

  • How to use a viewstate'd object as a datasource for controls on a user control

    - by user557325
    I've got a ListView on a control. Each row comprises a checkbox and another ListView. The outer ListView is bound to a property on the control (via a method call - you can't set a property as a SelectMethod on an ObjectDataSource, it would appear), which is lazy-loaded like this:

        Public ReadOnly Property ProductLineChargeDetails() As List(Of WebServiceProductLineChargeDetail)
            Get
                If ViewState("WSProductLineChargeDetails") Is Nothing Then
                    ViewState("WSProductLineChargeDetails") = GetWebServiceProductLineChargeDetails()
                End If
                Return DirectCast(ViewState("WSProductLineChargeDetails"), Global.System.Collections.Generic.List(Of Global.MI.Open.WebServiceProductLineChargeDetail))
            End Get
        End Property

    The shape of the object referenced by the data source is something like this (pseudocode):

        Product {
            bool Licenced;
            List<Charge> charges;
        }

        Charge {
            int property1;
            string property2;
            bool property3
            ...
        }

    The reason for the use of ViewState is this: when one of the checkboxes on one of the outer ListView rows is checked or unchecked, I want to modify the object that the ODS represents (for example, I'll add a couple of Charge objects to the relevant Product object) and then rebind. The problem I'm getting is that after every postback (specifically after checking or unchecking one of the rows' checkboxes) my ViewState is empty. This means that any changes I make to my ViewState'd object are lost. Now, I've worked out (after much googling and reading, among many others, Scott Mitchell's excellent piece on ViewState) that during initial databinding IsTrackingViewState is set to false. That means, I think, that assigning the return of GetWebServiceProductLineChargeDetails() to the ViewState item in my property Get during the initial databind won't work. Mind you, even when IsTrackingViewState is true and I call the property Get, come the next postback the ViewState is empty. So do you chaps have any ideas on how I can keep the object referenced by the ObjectDataSource in ViewState between postbacks, update it, and get those changes to stay in ViewState? This has been going on for a couple of days now and I'm getting fed up! Cheers in advance, Steve

    Read the article

  • Software Update Notifications

    - by devio
    I am considering implementing some sort of software update notification for one of the web applications I am developing. There are several questions I came across:

    Should the update check be executed on the client or on the server? A client-side check means the software retrieves the most current version information, performs its checks, and displays the update information. A server-side check means the software sends its version info to the server, which in turn does the calculations and returns information to the client. My guess is that a server-side implementation may turn out to be more flexible and more powerful than a client-side one, as I can add functionality to the server easily, as long as the client understands it.

    Where should the update info be displayed? Is it OK to display it on the login screen? Should only admins see it? (This is a web app with a database, so updating requires manipulation of the db and the web files, which is only done by admins.) What about a little beeping, flashing icon which increases in size as the version gets more obsolete every day ;) ?

    Privacy issues: not everybody likes to have their app usage stats broadcast over the internet.

    TheOnion question: What do you think?
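    For the client-versus-server question above, the server-side variant can stay very small on the wire while keeping all the comparison logic on the server. A minimal sketch of the client half (TypeScript; the endpoint and response fields are hypothetical):

        // The client only reports its version; the server decides whether an
        // update notice should be shown and what it should say.
        interface UpdateInfo {
            latestVersion: string;
            updateAvailable: boolean;
            message?: string;
        }

        async function checkForUpdate(currentVersion: string): Promise<UpdateInfo> {
            const response = await fetch(
                "/api/update-check?version=" + encodeURIComponent(currentVersion)
            );
            return response.json();
        }

        // Hypothetical usage on an admin-only screen:
        // const info = await checkForUpdate("2.4.1");
        // if (info.updateAvailable) showBanner(info.message ?? info.latestVersion);

    Keeping the decision server-side also softens the privacy point somewhat: the only thing transmitted is the version string, and that can be documented to the user.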

    Read the article

  • Installing and using acts-as-taggable-on

    - by seaneshbaugh
    This is going to be a really dumb question, I just know it, but I'm going to ask anyway because it's driving me crazy. How do I get acts-as-taggable-on to work? I installed it as a gem with "gem install acts-as-taggable-on" because I can't ever seem to get installing plugins to work, but that's a whole other batch of questions that are all probably really dumb. Anyway, no problems there; it installed correctly. I ran "ruby script/generate acts_as_taggable_on_migration" and "rake db:migrate" - again, no problems. I added acts_as_taggable to the model I want to use tags with, started up the server, and then loaded the index for the model just to see if what I've got so far is working, and got the following error:

        undefined local variable or method `acts_as_taggable' for #

    I figured that just meant I needed to add something like require 'acts-as-taggable-on' to my model's file, because that's typically what's necessary for gems. So I did that, hit refresh, and got:

        uninitialized constant ActiveRecord::VERSION

    I'm not even going to pretend to know what that means. Did I go wrong somewhere, or is there something else I need to do? The installation instructions seem to assume you generally know what you're doing and don't even begin to explain what to do when things go wrong.

    Read the article

  • Explaining verity index and document search limits

    - by Ahmad
    At present we have a CF8 Standard Edition server, which has some limitations around Verity indexing. According to Adobe, Verity Server has the following document search limits (the limits are for all collections registered to the Verity Server):

        - 10,000 documents for ColdFusion Developer Edition
        - 125,000 documents for ColdFusion Standard Edition
        - 250,000 documents for ColdFusion Enterprise Edition

    We have now reached a stage where the server-wide number of indexed documents exceeds 125k. However, the largest Verity collection consists of about 25k documents (and this is expected to grow), and only one collection is ever searched at a time. In my understanding, this means that I can still search an entire collection with no restrictions. Is this correct? Or does it mean that only documents that were indexed across all collections prior to reaching the limit are actually searchable?

    We are considering moving to CF9 Standard as a solution, and using the Solr option, which has no such restrictions. The coldfusionjedi blog highlights some differences between Verity and Solr. However, before we commit to an upgrade I am trying to gain a clearer understanding of this. Can someone provide a clear explanation of what this limit means and how it actually affects Verity searching and indexing?

    Read the article
