Search Results

Search found 2390 results on 96 pages for 'intelligent agent'.


  • Massive 404 attack with non-existent URLs. How to prevent this?

    - by tattvamasi
    The problem is a whole load of 404 errors, as reported by Google Webmaster Tools, for pages and queries that have never been there. One of them is viewtopic.php, and I've also noticed a scary number of attempts to check whether the site is a WordPress site (wp_admin) and probes for the cPanel login. I block TRACE already, and the server is equipped with some defense against scanning/hacking. However, this doesn't seem to stop it. The referrer is, according to Google Webmaster, totally.me. I have looked for a solution to stop this, because it certainly isn't good for the real users, let alone the SEO concerns. I am using the Perishable Press mini blacklist (found here), a standard referrer blocker (for porn, herbal, and casino sites), and even some software to protect the site (XSS blocking, SQL injection, etc.). The server is using other measures as well, so one would assume that the site is safe (hopefully), but it isn't ending. Does anybody else have the same problem, or am I the only one seeing this? Is it what I think, i.e., some sort of attack? Is there a way to fix it, or better, prevent this useless waste of resources? EDIT: I've never used the question itself to thank people for the answers, but I hope this can be done. Thank you all for your insightful replies, which helped me find my way out of this. I have followed everyone's suggestions and implemented the following:
    - a honeypot
    - a script that listens for suspect URLs on the 404 page and sends me an email with the user agent/IP, while returning a standard 404 header
    - a script that rewards legitimate users, in the same custom 404 page, in case they end up clicking on one of those URLs
    In less than 24 hours I was able to isolate some suspect IPs, all listed in Spamhaus. All the IPs logged so far belong to spammy VPS hosting companies. Thank you all again; I would have accepted all the answers if I could.
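
    For the curious, here is a minimal sketch of that 404-listener idea in Python (my own illustration, not the asker's actual script; the probe paths, mail addresses, and local MTA are assumptions):

        # Minimal sketch of the custom 404 handler described above: log hits
        # on known probe URLs with their IP/user agent, send a notification
        # mail, and still return an ordinary 404 to the client.
        import smtplib
        from email.message import EmailMessage
        from http.server import BaseHTTPRequestHandler, HTTPServer

        SUSPECT = ("/viewtopic.php", "/wp-admin", "/wp-login.php", "/cpanel")

        def notify(ip, agent, path):
            msg = EmailMessage()
            msg["Subject"] = f"404 probe: {path} from {ip}"
            msg["From"], msg["To"] = "honeypot@example.com", "admin@example.com"
            msg.set_content(f"ip={ip}\nuser-agent={agent}\npath={path}")
            with smtplib.SMTP("localhost") as s:   # assumes a local MTA
                s.send_message(msg)

        class Handler(BaseHTTPRequestHandler):
            def do_GET(self):
                if any(self.path.startswith(p) for p in SUSPECT):
                    notify(self.client_address[0],
                           self.headers.get("User-Agent", "-"), self.path)
                self.send_error(404)   # standard 404 header either way

        HTTPServer(("", 8080), Handler).serve_forever()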


  • Understanding the Value of SOA

    - by Mala Narasimharajan
    Written by: Debra Lilley, ACE Director, Fusion Applications. Again I want to talk from my area of expertise, Fusion Applications, and about their design fundamentals. If you look at the table below and start at the bottom: Oracle has defined all of the business objects (e.g. accounts, people, customers, invoices, etc.) used by Fusion Applications; each of these objects contains all of the information required and can be expanded if necessary. Oracle has also created, for each of these business objects, every action that is needed by the applications, e.g. all the actions to create a new customer: checking to see if it exists, credit checking with D&B (Dun & Bradstreet, http://www.dnb.co.uk/), creating the record, notifying those required, etc. Each of these actions is a stand-alone web service. Again, you can create new actions or subscribe to an externally provided web service, e.g. the D&B check. The diagram also shows that all development of Fusion Applications is built on the Fusion Middleware offerings. The Intelligent Business Process is then the order in which you run these actions; this is Service Oriented Architecture, SOA. Not only is SOA used to orchestrate actions within Fusion Applications, it is also used in the integration of Fusion Applications with the rest of the Oracle stable of applications such as EBS, PeopleSoft, JDE and Siebel. Those other applications are written with proprietary development tools, so how do they work with SOA? The answer is very simple: with the introduction of the Oracle SOA platform, each process within these applications was made available to be called as a web service. I won't go into how that is done technically, but what's known as a wrapper was added to allow each of them to act in this way. Finally, at the top of the diagram are the questions that each Fusion Application process must answer, and this is the 'special sauce' that makes them so good: the User Experience. But that is a topic for another day, or you can read about it in my blog http://debrasoracle.blogspot.co.uk/2014/04/going-on-record-about-fusion-apps-cloud.html or Oracle's own UX blog https://blogs.oracle.com/usableapps/. The concept behind AppAdvantage is not new; the idea that Oracle technology can add value to your Oracle applications investments is pretty fundamental. Nishit Rao, who is on the AppAdvantage team, provided me and other ACE Directors with demo kits so that we could demonstrate SOA running with the applications. The example I learnt to build was the EBS inventory open interface. The simple concept is that request records can be added to a table, and an import is run that creates these as transactions in inventory. What SOA allows you to do is add to the table from any source and then run this process automatically, whereas traditionally you had to run the process at regular intervals because you didn't know whether the table was empty or not. This may just sound like a different way of doing the same thing, but if the process is critical for your business then the interval had to be very small, and the process was potentially run many times unnecessarily. Using SOA it happens only when necessary, without any delay. So in my post today I've talked about how SOA is used with Fusion Applications and in linking with more traditional applications, but that is only the tip of the iceberg of its potential: your applications are just part of your IT systems, and SOA can orchestrate your data across all of them; the beauty of open standards.
Debra Lilley, Fusion Champion, UKOUG Board Member, Fusion User Experience Advocate and ACE Director. Lilley has 18 years' experience with Oracle Applications, with E-Business Suite since 9.4.1, moving on to Business Intelligence Team Lead and Oracle Alliance Director. She has spoken at over 100 conferences worldwide and posts at debrasoraclethoughts.
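
    As a toy illustration of that polling-versus-event-driven point (my own sketch, nothing to do with the actual demo kit; the queue stands in for the open interface table):

        # Instead of waking up every interval to check whether the interface
        # table has rows, the import runs only when a record actually arrives.
        import queue, threading, time

        interface_table = queue.Queue()   # stand-in for the open interface table

        def import_worker():
            while True:
                record = interface_table.get()   # blocks until a row arrives
                if record is None:               # sentinel: shut down
                    break
                print("imported inventory transaction:", record)

        threading.Thread(target=import_worker, daemon=True).start()

        # Any source can add to the "table"; the import runs immediately,
        # with no fixed polling interval and no empty runs.
        interface_table.put({"item": "WIDGET-1", "qty": 5})
        interface_table.put({"item": "WIDGET-2", "qty": 3})
        time.sleep(0.1)
        interface_table.put(None)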


  • New functionality in TFS Build Manager – Managing Triggers and Build Resources

    - by Jakob Ehn
    Yesterday we pushed out a new release (August 2012) of the Community TFS Build Extension, including a new version of the Community TFS Build Manager (1.0.4.6). The two big new features in the Build Manager in this release are: Set Triggers: it is now possible to select one or more build definitions and update the triggers for them in one simple operation. You'll note that we have started collapsing the context menu a bit; the list of commands is getting long! When selecting the Trigger command, you'll see a dialog where the options should be self-explanatory. The only thing missing here is the Scheduled trigger option; you'll have to do that using Team Explorer for now. Manage Build Resources: the other feature is that it is now possible to view the build controllers and agents in your current collection and also perform some actions against them. The new functionality is available by selecting the Build Resources item in the drop-down menu. Selecting this, you'll see a (sort of) hierarchical view of the build controllers and their agents. In this view you can quickly see all the resources and their status. You can also view the build directory of each build agent and the tags that are associated with them. On the action menu, you can enable and disable both agents and controllers (several at a time), and you can also choose to remove them. By selecting Manage, you'll be presented with the standard Manage Controller dialog from Visual Studio, where you can set the rest of the properties. Hopefully we'll be able to implement most of the existing functionality so that we can remove that menu option. Our plan is to add more functionality to this view, such as adding new agents/controllers, restarting build service hosts, and maybe viewing diagnostic information such as disk space and error logs. Hope you'll find the new functionality useful. Remember to log any bugs and feature requests on the CodePlex site. Happy building!


  • Which technology or strategy should a new/inexperienced freelancer use to earn a profit? [closed]

    - by w3softdev
    I have re-posted this question in the following group; if you find it suitable to answer, please click the link attached here or copy and paste it into your browser: http://answers.onstartups.com/questions/32767/which-technology-or-strategy-a-new-inexperienced-freelancer-should-use-to-earn Well, it is my very first question in this section, and I really don't know whether my query relates to this section or not. Anyway, I have a somewhat awkward query. (It is a bit of a story, but I guess some background on me is necessary.) I am a fresh graduate who has just started freelancing. In due course I have got a project to develop a website for a medium-scale business (a travel agent). As I don't trust my client to pay me in full after the completion of the website, I want to use a cheaper and more efficient technology, so that however much he pays, I keep at least a few percent as profit. I have learnt ASP.NET, but when I inquired about the cost of hosting my website, I got the recommendation to develop my web app using the combination of PHP and MySQL instead of ASP.NET + MS SQL. The problem is, I don't know PHP. Should I learn PHP, or work with what I am comfortable with and try to recover the full fee I deserve? (As it is my first project, I have also been advised that I may take some loss starting out; on the contrary, I don't want to take a loss, and want to earn an appropriate profit.)


  • Does Subversion have an analogue to VSS's links?

    - by bta
    I am migrating a Visual SourceSafe code repository to Subversion and I am running into a problem. Here is a simplified layout of our current source code tree (in VSS):

        project_root\
        |-libs\
        |-tools\
        |-arch_1\
        | |-include
        | |-source
        |-arch_2\
          |-include
          |-source

    My problem is in our two arch_ folders. Each arch_ folder will be built for a different hardware architecture, but the contents of the two folders are practically identical. The files in arch_2 are merely VSS links to the files in arch_1, with only a small handful of exceptions. Work is generally checked into and out of the arch_1 folder, and the VSS links make sure that any code checked in here is updated in the arch_2 folder as well. Moving to Subversion, is there anything that will behave like VSS's links? That is, is there a way to have two files in separate folders magically associated with one another such that they will always be in sync with each other (changes to one will affect the other as well)? Note: I know the correct answer here is to fix the build system. The build system on this project was pieced together roughly a decade ago, back when our compiler/build system wasn't intelligent enough to compile the same folder full of source code for two different architectures. Thanks to make and updated compilers, we can re-write the build system to eliminate this dependency on two parallel source folders. However, this will take time that we don't have at the moment (we are losing our license to our VSS server and are being forced to migrate on rather short notice). I am hoping to find a Subversion solution to this problem because at the moment, our time would be much better spent making the migration run smoothly than re-writing the build system (which is next on my to-do list!). Thank you for your help!


  • Recover files from corrupt filesystem

    - by Emile 81
    My situation: I have an older 80GB IDE internal hdd with a few files on it that I would very much like to recover:
    - some Word documents
    - some LaTeX documents (text files) and pictures (png, jpg, eps files)
    - some other text documents and Visual Studio project files
    I had backed them up (not the LaTeX ones, though) using svn, but have not committed lately, and would lose a lot of work if I can't recover them. The hdd seems to have lost its filesystem; I have no idea how that came about. I know it has/had 3 NTFS partitions, and I know the files I want are on the second or third partition. I read http://superuser.com/questions/81877/recover-hard-disk-data Partition Find and Mount did not see all the partitions using its intelligent scan. TestDisk does (I think); I followed the step-by-step instructions here, but when I try to list the files it says: "Can't open filesystem. Filesystem seems damaged." I'm not sure how to proceed here, as TestDisk's wiki does not contain this error message, AFAIK. I don't know whether the hdd is going to fail or some program has caused the filesystem to become corrupt; the hdd doesn't make a sound, so I guess that's good. I would like some guidance so I don't accidentally cause more damage. (E.g. is it OK to let TestDisk write the filesystem to disk? I'm pretty sure the partitions are listed OK, but not 100%.)


  • NHibernate Generators

    - by Dan
    What is the best tool for generating entity classes and/or hbm files and/or SQL scripts for NHibernate? The list below is from http://www.hibernate.org/365.html; which is the best, and why?
    - Moregen: free, open source (GPL) O/R generator that can merge into existing Visual Studio projects. Also merges changes to generated classes.
    - NConstruct Lite: free tool for generating NHibernate O/R mapping source code. Supports different databases (Microsoft SQL Server, Oracle, Access).
    - GENNIT NHibernate Code Generator: free/commercial Web 2.0 code generation of NHibernate code using a WYSIWYG online UML designer.
    - GenWise Studio with NHibernate Template: commercial product; imports your existing database and generates all XML and classes, including factories. It can also generate an ASP.NET web application for your NHibernate BO layer automatically.
    - HQL Analyzer and hbm.xml GUI Editor.
    - ObjectMapper by Mats Helander: a mapping GUI with NHibernate support.
    - MyGeneration: a template-based code generator GUI. Its template library includes templates for generating mapping files and classes from a database.
    - AndroMDA: an open-source code generation framework that uses Model Driven Architecture (MDA) to transform UML models into deployable components. It supports generation of data access layers that use NHibernate as their persistence framework.
    - CodeSmith Template for NH.
    - NHibernate Helper Kit: a VS2005 add-in to generate classes and mapping files.
    - NConstruct - Intelligent Software Factory: commercial product; full .NET C# source code generation for all tiers of the information system through a simple wizard procedure. O/R mapping based on NHibernate. For both WinForms and ASP.NET 2.0.


  • Getting started with character and text processing (encoding, regular expressions)

    - by TK
    I'd like to learn the foundations of encodings, characters and text. Understanding these is important for dealing with large sets of text, whether they are log files or text sources for building collective-intelligence algorithms. My current knowledge is pretty basic: something like "As long as I use UTF-8, I'm okay." I'm not saying I need to learn about advanced topics right away. But I need to know:
    - bit- and byte-level knowledge of encodings (see the small example below)
    - characters and alphabets not used in English
    - multi-byte encodings (I understand some Chinese and Japanese, and parsing them is important)
    - regular expressions
    - algorithms for text processing
    - parsing natural languages
    I also need an understanding of mathematics and corpus linguistics. The current and future web (semantic, intelligent, real-time) needs the processing, parsing and analysis of large amounts of text. I'm looking for some resources (maybe books?) that get me started on some of these bullets. (I have found many helpful discussions on regular expressions here on Stack Overflow, so you don't need to suggest resources on that topic.)
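
    To make the first bullet concrete, a small Python illustration (mine, not from the question) of how the same text becomes different byte sequences under different encodings:

        # The same characters, three encodings, three byte sequences.
        # Multi-byte UTF-8 is why byte-level slicing of text is unsafe.
        text = "abc" + "\u65e5\u672c\u8a9e"   # "abc" plus Japanese characters

        for enc in ("utf-8", "utf-16-le", "shift_jis"):
            data = text.encode(enc)
            print(enc, len(data), data.hex(" "))

        # Decoding with the wrong table does not fail loudly; it produces
        # plausible-looking wrong characters (mojibake):
        print(text.encode("utf-8").decode("latin-1"))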


  • Any Red5 Working Example Project for 0.9 release

    - by Daryl
    I'm trying to find one, just one, working sample project for Red5 that's been updated to work against the latest 0.9 release without missing JARs and other nonsense. Right now it's at v0.9, and the libs are different from other versions. They have 5 pathetic examples on their website, but all were built with older versions, and who knows which version, since they don't seem to be putting any effort into updating or marketing their open source project. For these 5 old examples, I could use the Add External JARs feature to try to add libs from previous versions, but they don't mention which versions the examples were built against, and I'm not going to try each previous version to see which works (I already did, and nothing works). P.S. Voted the worst-documented project on the planet. It's hilarious: one of their developers posted a few videos on YouTube without bothering to attach a sample zip for the get-started project he's demoing. No offense, but that's seriously #(#$$#($!@#*. Not a good start for any tech decision maker trying to assess a technology for commercial use. Can anyone more knowledgeable shed some light on behalf of these fools?


  • How to estimate the contribution of an individual to a software project?

    - by Amit Kumar
    I work on a software project and would like to estimate the percentage of the total contribution that I have put into the development of the software. Is there some tool that does this? Such a tool could be useful for appraisals or negotiations, for example. After all, we work for money (yes, not only money, but the point remains). I think there is enough hand-waving around the most important things. The estimation is very subjective (at least to me now), but I do not know of any tool that provides even a subjective estimate. I know of SLOCCount, which spells out the total effort using the lines of code, but not on a per-developer basis. My idea of an ideal tool for this purpose would:
    - measure the complexity of the code (more complex means more effort, but more effort is not necessarily more contribution)
    - measure the decomposability/flexibility of the software (more decomposable is better)
    - account for how much library code is used; using library code speeds up the development process, increases the associated risk, and requires the developer to already know or learn about the library
    - be intelligent enough to differentiate between "who wrote the code", "who copied the code" and "who indented the code"
    It is difficult to differentiate between the complexity of the implementation and the intrinsic complexity of the problem. Perhaps a comparison could be made with an equivalent open-source counterpart, if there is one, or for each submodule separately. If there is no such tool, is there no merit in having one? Or do you believe in "I do work, I do not measure"? It takes time, after all. Perhaps the project manager should do this estimation continuously, say, weekly. Are there any standards? Yes, standardization is difficult, because every project has a different goal, but difficult does not mean it is not useful.
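
    None of the wished-for intelligence exists off the shelf, as far as I know, but a crude per-author churn count is easy to get from version control. A sketch, assuming a git repository (this measures churn, not contribution, and fails every bullet above about copying and indenting):

        # Counts added/removed lines per author via `git log --numstat`.
        import subprocess
        from collections import defaultdict

        def churn_by_author(repo="."):
            out = subprocess.run(
                ["git", "-C", repo, "log", "--numstat", "--format=@%aN"],
                capture_output=True, text=True, check=True).stdout
            stats = defaultdict(lambda: [0, 0])   # author -> [added, removed]
            author = None
            for line in out.splitlines():
                if line.startswith("@"):          # commit header we formatted
                    author = line[1:]
                elif line.strip():                # "added<TAB>removed<TAB>path"
                    added, removed, _path = line.split("\t", 2)
                    if added != "-":              # "-" marks binary files
                        stats[author][0] += int(added)
                        stats[author][1] += int(removed)
            return stats

        for author, (a, r) in sorted(churn_by_author().items()):
            print(f"{author}: +{a} -{r}")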


  • BITS client fails to specify HTTP Range header

    - by user256890
    Our system is designed to deploy to regions with unreliable and/or insufficient network connections. We build our own fault-tolerant data replication services on top of BITS. Due to some security and maintenance requirements, we implemented our own ASP.NET file download service on the server side, instead of just letting IIS serve up the files. When the BITS client makes an HTTP download request with a specified range of the file, our ASP.NET page pulls the demanded file segment into memory and serves that up as the HTTP response. That is the theory. ;) The theory fails in artificial lab scenarios, and I would not let the system deploy in real-life scenarios unless we can overcome that. Lab scenario: I have the BITS client and IIS on the same developer machine, so practically I have enormous network "bandwidth", and BITS is intelligent enough to detect that. As the BITS client discovers the unlimited bandwidth, it gets more and more "greedy". With each HTTP request, BITS wants to grasp a greater and greater file range (we are talking about downloading CD ISO files and videos), demanding 20-40MB inside a single HTTP request, a size that I am not comfortable pulling into memory on the server side in one go. I can overcome that simply by giving less than demanded. That is OK. However, BITS then gets really "confident" and "arrogant", demanding files WITHOUT specifying the download range, i.e., it wants the entire file in a single request, and this is where things go wrong. I do not know how to answer that request in the case of a 600MB file. If I just provide the starting 1MB range of the file, the BITS client keeps sending HTTP requests for the same file without a download range to continue; it hammers its point that it wants the entire file in one go. Since I am reluctant to provide the entire file, BITS gives up after several tries and reports an error. Any thoughts?
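
    For illustration, a minimal sketch of the server-side policy described above (in Python rather than our ASP.NET code; the chunk cap is an invented number). The last comment marks exactly the spot where the trouble starts:

        # Honor Range requests but cap the chunk size served per request.
        import os, re

        CHUNK_CAP = 1 * 1024 * 1024   # serve at most 1 MB per request

        def handle_request(path, range_header):
            size = os.path.getsize(path)
            m = re.match(r"bytes=(\d+)-(\d*)$", range_header or "")
            start = int(m.group(1)) if m else 0
            end = int(m.group(2)) if m and m.group(2) else size - 1
            end = min(end, start + CHUNK_CAP - 1, size - 1)   # give less than demanded
            with open(path, "rb") as f:
                f.seek(start)
                body = f.read(end - start + 1)
            headers = {"Accept-Ranges": "bytes",
                       "Content-Range": f"bytes {start}-{end}/{size}",
                       "Content-Length": str(len(body))}
            # When the client sent no Range header at all, answering with a
            # 206 partial response is the case BITS objects to: strictly it
            # asked for the whole file and expects a 200 with all the bytes.
            return 206, headers, body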


  • Modelling Business Logic with NON-Techies

    - by cbmeeks
    The setup: WinForms/ASP.NET MVC projects; learning NHibernate; SQL Server-driven apps. I work with clients that have no idea how to model an application. That's what I'm for. However, we have lots of conflicts over validation, misunderstandings, etc. For example, the client will ask for an order entry screen. The screen should require a "product". That's fine and dandy. However, the client didn't know to tell me that the user can't order a product of "Class A" unless it's Tuesday. Or they need a time entry screen, and two days before it's rolled into production they casually mention that certain activities are only valid in certain situations; those situations being a week of coding. Those are, of course, crude examples (but not by much!). The problem is getting these non-technical clients to lay out their business logic. They somehow didn't realize that the "Class A" problem would come up two weeks later, etc. I'm all for agile programming, but is there an easy way to make business logic like this extremely easy to implement and change on almost a daily basis? I am, of course, splitting the project into hopefully intelligent pieces, using NHibernate, etc. But making this business logic so dynamic is really making it hard to project timelines. Any suggestions? I know there will never be a perfect client (or a perfect provider), but how do you guys deal with the constant misunderstandings? Thanks.
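
    One common shape for this (a hedged sketch, not a recommendation specific to the asker's stack; the rule and names are invented for illustration) is to treat each volatile rule as data plus a small predicate, so a "Class A only on Tuesday" surprise becomes a one-line change instead of a release:

        from dataclasses import dataclass
        from datetime import date

        @dataclass
        class Order:
            product_class: str
            placed_on: date

        # Each rule is (message, predicate); real rules might live in a
        # table or config file the client can sign off on.
        RULES = [
            ("Class A products may only be ordered on Tuesdays",
             lambda o: o.product_class != "A" or o.placed_on.weekday() == 1),
        ]

        def validate(order):
            return [msg for msg, ok in RULES if not ok(order)]

        print(validate(Order("A", date(2010, 5, 3))))   # a Monday: rule fires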


  • C++ Serialization Clean XML Similar to XSTREAM

    - by disown
    I need to write a Linux C++ app which saves its settings in XML format (for easy hand editing) and also communicates with existing apps through XML messages over sockets and HTTP. The problem is that I haven't been able to find any intelligent libs to help me; I don't particularly feel like writing DOM or SAX code just to read and write some very simple messages. Boost Serialization was almost a match, but it adds a lot of Boost-specific data to the XML it generates. This obviously doesn't work well for interchange formats. I'm wondering if it is possible to make Boost Serialization or some other C++ serialization library generate clean XML. I don't mind if there are some required extra attributes, like a version attribute, but I'd really like to be able to control their naming and also get rid of 'features' that I don't use: tracking_level and class_id, for instance. Ideally I would just like to have something similar to XStream in Java. I am aware of the fact that C++ lacks introspection and that it is therefore necessary to do some manual coding, but it would be nice if there was a clean solution for just reading and writing simple XML without kludges! If this cannot be done, I am also interested in tools where the XML schema is the canonical resource (contract first), a good JAXB alternative for C++. So far I have only found commercial solutions like CodeSynthesis XSD; I would prefer open source solutions. I have tried gSOAP, but it generates really ugly code and it is also SOAP-specific. In desperation I also started looking at alternative serialization formats for protocol buffers. This exists, but only for Java! It really surprises me that protocol buffers seem to be a better-supported data interchange format than XML. I'm going mad just finding libs for this app and I really need some new ideas. Anyone?


  • How do I forward `<Ctrl>-<Tab>` in Konsole?

    - by M. Tibbits
    I want to use intelligent tabbing in Emacs in C++ mode, but I also want to be able to insert a tab character when necessary. From other posts, I gather that the easiest way is to keep <Tab> on indent and bind <Ctrl>-<Tab> to insert the tab character. However, it appears that Konsole in KUbuntu won't forward the <Ctrl> modifier? My current .emacs file contains:

        (defun my-c-mode-common-hook ()
          (setq c++-tab-always-indent t)
          (setq tab-width 4)
          (setq indent-tabs-mode t))
        (add-hook 'c-mode-common-hook 'my-c-mode-common-hook)
        (global-set-key [C-tab] 'self-insert-command)

    So I believe that this will bind <Ctrl>-<Tab> to inserting a tab character. However, when I run:

        <Ctrl>-h k <Ctrl>-<Tab>

    Emacs only reports that I pressed the Tab key. Is there some option to Konsole (which I have searched through to no avail) or a global preference in KUbuntu that I need to set so that the <Ctrl> is also forwarded? (It certainly forwards all of the other <Ctrl>-blah commands.)


  • Scale image to fit text boxes around borders

    - by nispio
    I have the following plot in Matlab. The image size may vary, and so may the length of the text boxes at the top and left. I dynamically determine the strings that go in these text boxes and then create them with:

        [M,N] = size(img);
        imagesc((1:N)-0.5, (1:M)-0.5, img > 0.5);
        axis image; grid on;
        colormap([1 1 1; 0.5 0.5 0.5]);
        set(gca,'XColor','k','YColor','k','TickDir','out')
        set(gca,'XTick',1:N,'XTickLabel',cell(1,N))
        set(gca,'YTick',1:M,'YTickLabel',cell(1,M))
        for iter = 1:M
            text(-0.5, iter-0.5, sprintf(strL, br{iter,:}), ...
                'FontSize',16, ...
                'HorizontalAlignment','right', ...
                'VerticalAlignment','middle', ...
                'Interpreter','latex');
        end
        for iter = 1:N
            text(iter-0.5, -0.5, {bc{:,iter}}, ...
                'FontSize',16, ...
                'HorizontalAlignment','center', ...
                'VerticalAlignment','bottom', ...
                'Interpreter','latex');
        end

    where br and bc are cell arrays containing the appropriate numbers for the labels. The problem is that most of the time the text gets clipped by the edges of the figure. I am using this as a workaround:

        set(gca,'Position',[0.25 0.25 0.5 0.5]);

    As you can see, I am simply adding a larger border around the plot so that there is more room for the text. While this scaling works for one zoom level, if I maximize my plot window I get way too much empty space, and if I shrink the plot window I get clipping again. Is there a more intelligent way to add these labels, using the minimum amount of space while making sure that the text does not get clipped?


  • Using game of life or other virtual environment for artificial (intelligence) life simulation? [closed]

    - by Berlin Brown
    One of my interests in AI focuses not so much on data but more on biological computing. This includes neural networks, mapping the brain, cellular automata, virtual life and environments. Described below is an exciting project that involves developing a virtual environment for bots to evolve in. "Polyworld is a cross-platform (Linux, Mac OS X) program written by Larry Yaeger to evolve Artificial Intelligence through natural selection and evolutionary algorithms." http://en.wikipedia.org/wiki/Polyworld Polyworld is a promising project for studying virtual life, but it is still far from creating an "intelligent, autonomous" agent. Here is my question: in theory, what parameters would you use to create an AI environment? Possibly a brain environment? Possibly multiple self-contained life organisms that have their own "brain" or life structures. I would like to create a spin on the Game of Life simulation. What if you have a 64x64 Game of Life grid, but instead of one grid, you have N grids? The N grids are your "life force": if all of the Game of Life entities die in a particular grid, then that entire grid dies, and a group of grids makes up a life form (a rough sketch of this idea follows below). I don't have an immediate goal. First, I want to simulate an environment and visualize what is going on in it with OpenGL, and see if the environment has any interesting properties. I then want to add "scarce resources" and see if the AI environment can manage resources adequately.
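
    Here is that rough sketch of the multi-grid idea (the scaffolding, grid count, seeding, and the "empty grid dies" rule are my reading of the proposal, not established code):

        # N independent Game of Life grids make up one "life form"; a grid
        # whose cells all die is removed, and the organism is dead when no
        # grids remain. Grids are plain 64x64 Conway boards, stored as sets
        # of live-cell coordinates.
        import random

        SIZE = 64

        def step(grid):
            nxt = set()
            candidates = grid | {(x+dx, y+dy) for x, y in grid
                                 for dx in (-1, 0, 1) for dy in (-1, 0, 1)}
            for x, y in candidates:
                n = sum((x+dx, y+dy) in grid
                        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                        if (dx, dy) != (0, 0))
                if (n == 3 or (n == 2 and (x, y) in grid)) \
                        and 0 <= x < SIZE and 0 <= y < SIZE:
                    nxt.add((x, y))
            return nxt

        class LifeForm:
            def __init__(self, n_grids, seed_cells=200):
                self.grids = [{(random.randrange(SIZE), random.randrange(SIZE))
                               for _ in range(seed_cells)}
                              for _ in range(n_grids)]

            def tick(self):
                # A grid with no live cells dies and takes "life force" with it.
                self.grids = [g for g in (step(g) for g in self.grids) if g]

        organism = LifeForm(n_grids=8)
        for t in range(100):
            organism.tick()
            if not organism.grids:
                print("organism died at tick", t)
                break
        else:
            print("still alive with", len(organism.grids), "grids")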


  • Why do mozilla and webkit prepend -moz- and -webkit- to CSS3 rules?

    - by egarcia
    CSS3 rules bring lots of interesting features. Take border-radius, for example. The standard says that if you write this rule:

        div.rounded-corners { border-radius: 5px; }

    I should get a 5px border radius. But neither Mozilla nor WebKit implements this. However, they both implement the same thing, with the same parameters, under a different name (-moz-border-radius and -webkit-border-radius, respectively). In order to satisfy as many browsers as possible, you end up with this:

        div.rounded-corners {
            border-radius: 5px;
            -moz-border-radius: 5px;
            -webkit-border-radius: 5px;
        }

    I can see two obvious disadvantages:
    - copy-pasted code, which has obvious risks that I will not discuss here
    - the W3C CSS validator will not validate these rules
    At the same time, I don't see any obvious advantages. I believe that the people behind Mozilla and WebKit are more intelligent than I am; there must be some good reasons to have things structured this way. It's just that I can't see them. So I must ask you people: why is this?


  • ASP MVC - Routing Required?

    - by evo_9
    I've been reading up on MVC2, which came with VS2010, and it sounds pretty interesting. I'm actually in the middle of a large multi-tenant application project and have just started coding the UI, so I'm considering changing over to MVC as I'm not that far along at this point. I have some questions about the routing capabilities, namely: is routing required to use MVC, or can I more or less ignore it? Or do I have to set up a default routing record that will make things work like standard ASPX (as far as routing alone is concerned)? The reason I don't want to use routing is that I've already defined a custom URL 'rewrite' mechanism of my own (which fires on Session_Start). In addition, I'm using jQuery and open standards for the entire UI, and MVC's ASPX-overhead-free approach seems like a better fit for how I've already started to build the application (I am not using ViewState at all, for example). I guess my big concern is whether the routing can be ignored, or if I will have to re-implement my custom URL rewriting to work with MVC, and if that's the case, how I would do that: as a new routing routine, or sticking with Session_Start (if that's even possible?). Lastly, I don't want to use anything even remotely 'intelligent/readable' for the URLs. For a site like Stack Overflow, the readability of the URL is a positive, but the opposite is true if it's not a public website like this one. In fact, it would seem to me that the friendlier MVC routing URLs (which indirectly show method names) could pose a security risk on a private, non-public website app like the one I'm developing. For all these reasons I would love to use the lightweight aspects of MVC but skip the routing entirely. Is this possible?


  • Where to post code for open source usage?

    - by Douglas
    I've been working for a few weeks now with the Google Maps API v3 and have done a good bit of development for the map I've been creating. Some of the things I've done had to be done to add usability where there previously was not any, at least none that I could find online. Essentially, I made a list of what had to be done, searched all over the web for ways to do what I needed, and found that some things were not (at the time) possible, in the "grab an example off the web" sense. Thus, in working on this map, I have created a number of very useful tools which I would like to share with the development community. Is there anywhere I could use as a hub, apart from my portfolio (http://dougglover.com), to allow people to view and recycle my work? I know how hard it can be to need to do something and be unable to find the solution elsewhere, and I don't think that if something has been done before, it should need to be written again and again. Hence open source code, right? Firstly, I was considering coming on here, asking a question, and then just answering it myself. The problem there is that I assume it would look like a big reputation grab. If not, please let me know and I'll go ahead and do that so people here can see it. Other suggestions are appreciated. Some stuff I've made:
    - A (new and improved) LatLng generator. Works quicker; generates a LatLng based on the position of a draggable marker; allows searching for an address to place the marker on or near the desired location (much better than having to scroll to your location all the way from Siberia); and since it's a draggable marker, double-clicking zooms in instead of creating a new LatLng marker like the one I was originally using.
    - The ability to create entirely custom "Smart Paths". Plot LatLng points on the map which connect to each other just like they do in actual Google Maps. Using Dijkstra's algorithm in JavaScript, the routing is intelligent and always gives the shortest possible route over the points provided (see the sketch after this list). A simple, easy-to-read multi-dimensional array system allows new points to be added to the grid easily.
    Any suggestions, etc. appreciated.
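
    Not the poster's JavaScript, but a minimal Python sketch of the Dijkstra routing idea just described: nodes are (lat, lng) points, edges are the allowed connections, and weights are straight-line distances:

        import heapq, math

        def dist(a, b):
            return math.hypot(a[0] - b[0], a[1] - b[1])

        def shortest_path(edges, start, goal):
            # edges: {node: [neighbor, ...]} adjacency over (lat, lng) tuples
            pq = [(0.0, start, [start])]
            seen = set()
            while pq:
                cost, node, path = heapq.heappop(pq)
                if node == goal:
                    return cost, path
                if node in seen:
                    continue
                seen.add(node)
                for nxt in edges.get(node, ()):
                    if nxt not in seen:
                        heapq.heappush(pq, (cost + dist(node, nxt), nxt, path + [nxt]))
            return math.inf, []

        graph = {(0, 0): [(0, 1), (1, 0)], (0, 1): [(1, 1)],
                 (1, 0): [(1, 1)], (1, 1): []}
        print(shortest_path(graph, (0, 0), (1, 1)))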


  • Uncrackable anti-piracy protection/DRM even possible? [closed]

    - by some guy
    I hope that this is programming-related enough. You have probably heard about Ubisoft's recent steps against piracy (their new DRM requires a constant connection to the Ubisoft server). Many people, including me, see this as intolerable, because the only ones suffering from it in the end are the paying customers. Now to the actual question(s): Ubisoft justified this mechanism by calling it "uncrackable, only playable by the paying customers". Is a so-called uncrackable DRM even possible? You can reverse-engineer and modify everything, even if it takes a long time. Isn't Ubisoft already lying by calling something uncrackable? I mean, hey, with the game you get all its content (textures, models, you know) and some anti-piracy mechanism hardcoded into it. How could that be "uncrackable"? You can just patch the unwanted mechanisms out; "pirates" then play the cracked game without problems, and the paying customers are the idiots, having constant problems with the game and being unable to play it without a (working) internet connection. What are the points Ubisoft sees in this? If they are at least a bit intelligent and informed, they know their anti-piracy protection won't last long. All they get is lower sales, angry customers, and happy pirates and crackers.


  • Significance in R

    - by Gemsie
    Ok, this is quite hard to explain, but I'm at a complete loss as to what to do. I'm a relative newcomer to R, and although I can completely admire how powerful it is, I'm not too good at actually using it... Basically, I have some very contrived data that I need to analyse (it wasn't me who chose this, I can assure you!). I have the right- and left-hand lengths of lots of people, as well as some numeric data that measures their sociability. Now I would like to know whether people who have significantly different hand lengths are more or less sociable than those whose hands are the same length (leading into the research that 'symmetrical' people are more sociable and intelligent, etc.). I have got as far as loading the data into R, and then I have no idea where to go from there. How on Earth do I start to separate those who are close to symmetrical from those who aren't, to then start the analysis? Ok, using Sasha's great advice, I did the cor.test and got the following:

        Pearson's product-moment correlation

        data:  measurements$l.hand - measurements$r.hand and measurements$sociable
        t = 0.2148, df = 150, p-value = 0.8302
        alternative hypothesis: true correlation is not equal to 0
        95 percent confidence interval:
         -0.1420623  0.1762437
        sample estimates:
               cor
        0.01753501

    I have never used this test before, so I am unsure how to interpret it... you wouldn't think I was on my fourth scientific degree, would you? :(
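
    For readers more at home outside R, a rough Python analogue of the same test on made-up data (an illustration only; the numbers, and the idea of also testing the absolute difference |l - r|, which is arguably closer to "asymmetry" than the signed difference, are mine):

        # pearsonr returns (correlation, p-value), like cor.test's estimate
        # and p-value. Data here are invented, with n = 152 to match df = 150.
        import numpy as np
        from scipy.stats import pearsonr

        rng = np.random.default_rng(0)
        l_hand = rng.normal(18.0, 1.0, 152)           # invented hand lengths
        r_hand = l_hand + rng.normal(0.0, 0.3, 152)   # mostly symmetric
        sociable = rng.normal(50.0, 10.0, 152)        # invented scores

        r_signed, p_signed = pearsonr(l_hand - r_hand, sociable)
        r_abs, p_abs = pearsonr(np.abs(l_hand - r_hand), sociable)
        print(f"signed diff:   r={r_signed:.3f}, p={p_signed:.3f}")
        print(f"absolute diff: r={r_abs:.3f}, p={p_abs:.3f}")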


  • MySQL query puzzle - finding what WOULD have been the most recent date

    - by Hank
    I've looked all over and haven't yet found an intelligent way to handle this, though I feel sure one is possible. One table of historical data has quarterly information:

        CREATE TABLE Quarterly (
            unique_ID INT UNSIGNED NOT NULL,
            date_posted DATE NOT NULL,
            datasource TINYINT UNSIGNED NOT NULL,
            data FLOAT NOT NULL,
            PRIMARY KEY (unique_ID));

    Another table of historical data (which is very large) contains daily information:

        CREATE TABLE Daily (
            unique_ID INT UNSIGNED NOT NULL,
            date_posted DATE NOT NULL,
            datasource TINYINT UNSIGNED NOT NULL,
            data FLOAT NOT NULL,
            qtr_ID INT UNSIGNED,
            PRIMARY KEY (unique_ID));

    The qtr_ID field is not part of the feed of daily data that populated the database. Instead, I need to retroactively populate the qtr_ID field in the Daily table with the Quarterly.unique_ID row ID, using what would have been the most recent quarterly data on that Daily.date_posted for that data source. For example, if the quarterly data is

        101  2009-03-31  1  4.5
        102  2009-06-30  1  4.4
        103  2009-03-31  2  7.6
        104  2009-06-30  2  7.7
        105  2009-09-30  1  4.7

    and the daily data is

        1001  2009-07-14  1  3.5  ??
        1002  2009-07-15  1  3.4  &&
        1003  2009-07-14  2  2.3  ^^

    then we would want the ?? qtr_ID field to be assigned '102' as the most recent quarter for that data source on that date, and && would also be '102', and ^^ would be '104'. The challenges include that both tables (particularly the daily table) are actually very large, they can't be normalized to get rid of the repetitive dates or otherwise optimized, and for certain daily entries there is no preceding quarterly entry. I have tried a variety of joins, using DATEDIFF (where the challenge is finding the minimum value of DATEDIFF greater than zero), and other attempts, but nothing is working for me; usually my syntax is breaking somewhere. Any ideas welcome; I'll execute any basic ideas or concepts and report back.
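
    One shape worth trying (my suggestion, not from the original thread) is a correlated scalar subquery that picks, for each daily row, the newest quarterly row with the same datasource and an earlier-or-equal date. A runnable sketch with sqlite3 standing in for MySQL (the UPDATE itself is plain SQL; performance on the real, very large Daily table is a separate question):

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
        CREATE TABLE Quarterly (unique_ID INT, date_posted DATE, datasource INT, data FLOAT);
        CREATE TABLE Daily     (unique_ID INT, date_posted DATE, datasource INT, data FLOAT, qtr_ID INT);
        INSERT INTO Quarterly VALUES
          (101,'2009-03-31',1,4.5),(102,'2009-06-30',1,4.4),
          (103,'2009-03-31',2,7.6),(104,'2009-06-30',2,7.7),(105,'2009-09-30',1,4.7);
        INSERT INTO Daily VALUES
          (1001,'2009-07-14',1,3.5,NULL),(1002,'2009-07-15',1,3.4,NULL),(1003,'2009-07-14',2,2.3,NULL);
        """)
        # For each daily row, take the newest quarterly row for the same
        # datasource dated on or before the daily date; NULL if none precedes.
        db.execute("""
        UPDATE Daily SET qtr_ID = (
          SELECT q.unique_ID FROM Quarterly q
          WHERE q.datasource = Daily.datasource
            AND q.date_posted <= Daily.date_posted
          ORDER BY q.date_posted DESC
          LIMIT 1)
        """)
        for row in db.execute("SELECT unique_ID, qtr_ID FROM Daily"):
            print(row)   # expected: (1001, 102), (1002, 102), (1003, 104)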


  • Timing related crash when unloading a DLL?

    - by fbrereto
    I know I'm grasping at straws here, but this one is a mystery; any pointers or help would be most welcome, so I'm appealing to those more intelligent than I: we have a crash exhibited in our release binaries only. The crash takes place as the binary is bringing itself down and terminating the sub-libraries upon which it depends. Whether it can be reproduced depends on the machine: some machines reproduce the crash 100% reliably, some don't exhibit the issue at all, and some are in between. The crash is deep within one of the sub-libraries, and there is a good likelihood the stack is corrupt by the time the rubble can be brought into a debugger (MSVC 2008 SP1) to be examined. Running the binary under the debugger prevents the bug from happening, as does remote debugging, as does (of all things) connecting to the machine via VNC. We tried installing the Microsoft Driver Development Kit, and doing so also squelches the bug. What would be the next best place to look? What tools would be best in this circumstance? Does it sound like a race condition, or something else?


  • In Emacs, how can I use imenu more sensibly with C#?

    - by Cheeso
    I've used Emacs for a long time, but I haven't been keeping up with a bunch of its features. One of these is speedbar, which I just briefly investigated now. Another is imenu. Both of these were mentioned in in-emacs-how-can-i-jump-between-functions-in-the-current-file? Using imenu, I can jump to particular methods in the module I'm working in. But there is a parse hierarchy that I have to negotiate before I get the option to choose (with autocomplete) the method name. It goes like this: I type M-x imenu and then I get to choose Using or Types. The Using choice allows me to jump to any of the using statements at the top level of the C# file (something like import statements in a Java module, for those of you who don't know C#). Not super helpful. I choose Types. Then I have to choose a namespace and a class, even though there is just one of each in the source module. At that point I can choose between variables, types, and methods. If I choose methods, I finally get the list of methods to choose from. The hierarchy I traverse looks like this:

        Using
        Types
          Namespace
            Class
              Types
              Variables
              Methods
                method names

    Only after I get to the fifth level do I get to select the thing I really want to jump to: a particular method. imenu seems intelligent about the source module, but kind of hard to use. Am I doing it wrong?


  • How do I compare two PropertyInfos or methods reliably?

    - by Rob Ashton
    The same goes for methods, too: I am given two instances of PropertyInfo or MethodInfo which have been extracted from the class they sit on via GetProperty or GetMember, etc. (or from a MemberExpression, maybe). I want to determine whether they are in fact referring to the same property or the same method, so that (propertyOne == propertyTwo) or (methodOne == methodTwo). Clearly that isn't going to actually work: you might be looking at the same property, but it might have been extracted from different levels of the class hierarchy (in which case, generally, propertyOne != propertyTwo). Of course, I could look at DeclaringType and re-request the property, but this starts getting a bit confusing when you start thinking about:
    - properties/methods declared on interfaces and implemented on classes
    - properties/methods declared on a base class (virtually) and overridden on derived classes
    - properties/methods declared on a base class and hidden with 'new' (in the IL world this is nothing special, IIRC)
    At the end of the day, I just want to be able to do an intelligent equality check between two properties or two methods. I'm 80% sure that the above bullet points don't cover all of the edge cases, and while I could just sit down, write a bunch of tests and start playing about, I'm well aware that my low-level knowledge of how these concepts are actually implemented is not excellent, and I'm hoping this is an already-answered topic and I just suck at searching. The best answer would give me a couple of methods that achieve the above, explaining what edge cases have been taken care of and why :-)

