Search Results

Search found 50994 results on 2040 pages for 'simple solution'.


  • Problem with loading compiled c code in R x64 using dyn.load

    - by Sacha Epskamp
    I recently went from a 32-bit laptop to a 64-bit desktop (both Win7). I just found out that I now get an error when loading DLLs using dyn.load. I guess this is a simple mistake and I am overlooking something. For example, I write this simple C function (foo.c): void foo( int *x) {*x = *x + 1;} Then compile it at the command prompt: R CMD SHLIB foo.c In 32-bit R I can then use it: > setwd("R") > dyn.load("foo.dll") > .C("foo",as.integer(1)) [[1]] [1] 2 but in 64-bit R I get: > dyn.load("foo.dll") Error in inDL(x, as.logical(local), as.logical(now), ...) : unable to load shared object 'C:/Users/Sacha/Documents/R/foo.dll': LoadLibrary failure: %1 is not a valid Win32 application.

    Read the article

  • Conditional deserialization

    - by Yordan Pavlov
    I am still not sure whether the title of my question is correct, and it most probably is not. However, I have spent some time searching both the net and stackoverflow and I cannot find a good description of the issue I am facing. Basically, what I want to achieve is the ability to read some raw bytes and, based on the value of some of them, to interpret the rest in different ways. This is roughly how TLV works: you check the tag and, depending on it, interpret the rest. Of course I can always keep that logic in my C++ code; however, I am looking for a solution which would move the logic out of the source code (maybe to some XML description). This would allow me to describe different encodings (protocols) more easily. I am familiar with Protocol Buffers and some other serialization libraries, however all of them solve a different issue. They assume they are on both ends of the communication, while I want to describe the communication (sort of). Is such a solution available, and if not, why not? Are there some inherent difficulties I would face trying to implement it?
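
    A minimal sketch of the raw-byte side of this, assuming a simple one-byte-tag / one-byte-length framing; the tags and the dispatch on them in main() are hypothetical, and in a real design that dispatch is exactly the part you would drive from an external (e.g. XML) description instead of hard-coding it:

        #include <cstdint>
        #include <iostream>
        #include <string>
        #include <vector>

        // One record in a simple tag-length-value stream:
        // 1 byte tag, 1 byte length, then `length` bytes of payload.
        struct TlvRecord {
            uint8_t tag;
            std::vector<uint8_t> value;
        };

        // Parse as many complete records as the buffer contains.
        std::vector<TlvRecord> parseTlv(const std::vector<uint8_t>& buf) {
            std::vector<TlvRecord> records;
            std::size_t pos = 0;
            while (pos + 2 <= buf.size()) {
                TlvRecord rec;
                rec.tag = buf[pos];
                std::size_t len = buf[pos + 1];
                if (pos + 2 + len > buf.size()) break;   // truncated record
                rec.value.assign(buf.begin() + pos + 2, buf.begin() + pos + 2 + len);
                records.push_back(rec);
                pos += 2 + len;
            }
            return records;
        }

        int main() {
            // tag 0x01 -> big-endian 16-bit integer, tag 0x02 -> ASCII text (made-up tags)
            std::vector<uint8_t> raw = {0x01, 0x02, 0x00, 0x2A, 0x02, 0x02, 'h', 'i'};
            for (const TlvRecord& r : parseTlv(raw)) {
                if (r.tag == 0x01 && r.value.size() == 2)
                    std::cout << "int: " << ((r.value[0] << 8) | r.value[1]) << "\n";
                else if (r.tag == 0x02)
                    std::cout << "str: " << std::string(r.value.begin(), r.value.end()) << "\n";
            }
        }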

    Read the article

  • Just for fun (C# and C++)...time yourself [closed]

    - by Ted
    Possible Duplicate: What is your solution to the FizzBuzz problem? OK guys, this is just for fun, no flaming allowed! I was reading the following http://www.codinghorror.com/blog/2007/02/why-cant-programmers-program.html and couldn't believe the following sentence... "I've also seen self-proclaimed senior programmers take more than 10-15 minutes to write a solution." For those that can't be bothered to read the article, the background is this: ....I set out to develop questions that can identify this kind of developer and came up with a class of questions I call "FizzBuzz Questions" named after a game children often play (or are made to play) in schools in the UK. An example of a Fizz-Buzz question is the following: Write a program that prints the numbers from 1 to 100. But for multiples of three print "Fizz" instead of the number and for the multiples of five print "Buzz". For numbers which are multiples of both three and five print "FizzBuzz". So I decided to test myself. It took me 5 minutes in C++ and 3 minutes in C#! So just for fun try it and post your timings + language used! P.S. NO UNIT TESTS REQUIRED, NO OUTSOURCING ALLOWED, SWITCH OFF RESHARPER! :-) P.S. If you'd like to post your source then feel free.
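
    For reference, a minimal C++ take on the FizzBuzz statement quoted above (just one of many equally valid solutions, and not the code the poster timed):

        #include <iostream>

        int main() {
            for (int i = 1; i <= 100; ++i) {
                if (i % 15 == 0)
                    std::cout << "FizzBuzz\n";   // multiple of both three and five
                else if (i % 3 == 0)
                    std::cout << "Fizz\n";
                else if (i % 5 == 0)
                    std::cout << "Buzz\n";
                else
                    std::cout << i << "\n";
            }
        }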

    Read the article

  • I'm annoyed with asp .net mvc action links? Is there something better in MVC3?

    - by Jonathon Kresner
    After almost 3 years with mvc I'm scratching my head. Is it just me, or does the way we specify links in asp .net mvc suck? @Html.ActionLink("Log Off", "LogOff", "Account") In the previews for mvc 1 we had the funky generic action links which gave us intellisense and compile checking, which I LOVED. I know they removed them because of performance issues and because you could not actually guarantee that the route would resolve all the time... However, the default way of doing it just doesn't make me feel safe enough in a big application. I've also used T4MVC with MVC2; to be honest, I didn't really like it. It's not part of the Mvc framework and is frustrating to develop with, especially with source control in big teams and continuous integration builds. I guess I could also import Mvc Futures and keep using the generic types (it's probably what I'll do). I'm just about to start a very big project and was wondering what other people are thinking. Is anyone else annoyed with the options, or has a new solution? It seems like ActionLinks are the most basic & frequently used feature. Shouldn't there be a good out-of-the-box solution? We're just about to hit revision 3 of this framework.

    Read the article

  • How can I find the common ancestor of two nodes in a binary tree?

    - by Siddhant
    The Binary Tree here is not a Binary Search Tree. It's just a Binary Tree. The structure could be taken as - struct node { int data; struct node *left; struct node *right; }; The best solution I could work out with a friend was something of this sort - Consider this binary tree (from http://lcm.csa.iisc.ernet.in/dsa/node87.html) : The inorder traversal yields - 8, 4, 9, 2, 5, 1, 6, 3, 7 And the postorder traversal yields - 8, 9, 4, 5, 2, 6, 7, 3, 1 So for instance, if we want to find the common ancestor of nodes 8 and 5, then we make a list of all the nodes which are between 8 and 5 in the inorder tree traversal, which in this case happens to be [4, 9, 2]. Then we check which node in this list appears last in the postorder traversal, which is 2. Hence the common ancestor for 8 and 5 is 2. The complexity of this algorithm, I believe, is O(n) (O(n) for the inorder/postorder traversals, the rest of the steps again being O(n) since they are nothing more than simple iterations in arrays). But there is a strong chance that this is wrong. :-) It is a very crude approach, and I'm not sure if it breaks down for some case. Is there any other (possibly more optimal) solution to this problem?
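
    A commonly suggested alternative (not the poster's inorder/postorder method) is a single recursive traversal that returns the lowest common ancestor in O(n) time without building any auxiliary arrays. A minimal sketch using the node structure above, assuming both target nodes are actually present in the tree:

        struct node {
            int data;
            struct node *left;
            struct node *right;
        };

        // Returns the lowest common ancestor of a and b, assuming both are in the tree.
        struct node *lca(struct node *root, struct node *a, struct node *b) {
            if (root == nullptr || root == a || root == b)
                return root;
            struct node *l = lca(root->left, a, b);
            struct node *r = lca(root->right, a, b);
            if (l != nullptr && r != nullptr)   // a and b were found in different subtrees
                return root;
            return (l != nullptr) ? l : r;      // both in one subtree, or not found yet
        }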

    Read the article

  • Using virtualization infrastructure for J2EE application distribution- viable alternative?

    - by Dan
    Our company builds custom J2EE web solutions. At the moment, we use standard J2EE distribution mechanisms (ear/war archives). Application servers are generally administered by our clients' IT departments, and since we do not have complete control over the environment, a lot of entropy can be introduced into the solution. For example: latest app. server patch not applied, conflicting third party libraries inside the app. server root, server runtime and tuning parameters not configured (for example, number of connections in the database pool). We are looking into using virtualization infrastructure for J2EE application distribution. Instead of sending the ear/war archive, we'd send an image with an application server node and our application preinstalled. Some of the benefits are the same as with using virtualization infrastructure in general, namely better use of hardware resources. For us, it also reduces the entropy of the hosting infrastructure: a distributed VM should be less affected by the hosting environment. So far, the main downside I see is application server licensing, since clients would have to use dedicated servers for our solution, but this is generally already done that way. Also, there is some complexity in maintaining virtualization infrastructure, but this is often something IT departments have more experience with than with administering and fine-tuning J2EE solutions. Does anyone have experience with this model? What are the downsides? Will we not just replace one type of complexity with another?

    Read the article

  • Can TFS workspaces be used without being tied to a specific machine?

    - by GWLlosa
    So I've got a situation where we have a project with 10 developers. Each developer, when they come in for the day, is randomly issued a machine to use for development that day. The machine names are different, say DEV01 - DEV10. At the time that they are issued to the developers, the machines are identical, and no changes the developers make during the day are persisted on the machines (source code changes are stored in TFS, not locally). These are of course actually virtual machines, but that's not really relevant to the point at hand. The problem is that each morning, the developers run into 3 issues: 1) The machine that they are assigned may not be the same machine they were last assigned to. For example, DevMan A might have used DEV04 yesterday, and received DEV06 today. His workspace definitions are still tied to DEV04; he must create a new workspace, or migrate the old workspace to DEV06. 2) The machine that they are assigned may have been in use yesterday, and some of the mappings may conflict. For example, DevMan A might have DEV04 today, and wish to create a workspace mapping the project folder to "C:\MyProj\Solution". However, DevMan B had DEV04 yesterday, and he used the same project folder. TFS now complains. 3) This may be the first time they are on a given machine. They now need to recreate all of their source-control mappings for the new machine. All of these issues can be resolved in a straightforward fashion on a case-by-case basis, but it does sap some productivity from the morning. We'd much prefer it if the TFS workspace definitions could be 'relaxed', such that they did not include the machine name in the definition somehow. Barring that, if anyone is aware of a solution to the above problems that can run automatically, or with limited user intervention, that would also be ideal.

    Read the article

  • architecture - centralized location for different modules (cms, web applications, ...) - best practice

    - by NicoJuicy
    Let's just say that I want to create a cms + other online applications. I want them all to integrate into a central location, but they also have to be available separately (not everyone wants more than the cms solution). Would I create a huge central application that contains the whole database and communicates through a webservice with the "standalone - integrated" modules? Or would I create them separately, with the only thing the "central" application does being syncing the information (eg. the cms and another solution can have the same tables (eg. clients or employees))? Or do you have another idea? (I know I'm a little vague, but I can't "give" a lot of details because of my work contract.) If someone has all the "packages" it should be possible for the central application to integrate all the modules in one place! Or if someone has more than 1 module, it should combine this on the website. What I thought is best is that the central location contains only the users and their rights (eg. cms - all rights, ...), and the information gets synced with every change. (module cms, adding a new client - store locally and send data to the central location, central location - send to modules = table clients updated everywhere) This way, even if someone only "bought" a module, it can sync easily through the complete architecture. I hope I made myself clear!

    Read the article

  • Sql Server Replication: Snapshot vs Merge

    - by Zyphrax
    Background information Let's say I have two database servers, both SQL Server 2008. One is in my LAN (ServerLocal), the other one is on a remote hosting environment (ServerRemote). I have created a database on ServerLocal and have an exact copy of that database on ServerRemote. The database on ServerRemote is part of a web application and I would like to keep its data up-to-date with the data in the database on ServerLocal. ServerLocal is able to communicate with ServerRemote; this is one-way traffic. Communication from ServerRemote to ServerLocal isn't available. Current solution I thought it would be a nice solution to use replication. So I've made ServerLocal a publisher and subscriptions are pushed to ServerRemote. This works fine: when a snapshot is transferred to ServerRemote the existing data will be purged and the ServerRemote database is once again an exact replica of the database on ServerLocal. The problem Records that exist on ServerRemote but don't exist on ServerLocal are removed. This doesn't matter for most of my tables, but in some of my tables I'd like to keep the existing data (aspnet_users for instance), and update the records if necessary. What kind of replication fits my problem?

    Read the article

  • boost multi_index partial indexes

    - by Gokul
    Hi, I want to implement, inside a Boost multi_index container, two sets of keys with the same search criteria but different eviction criteria. Say I have two sets of data with the same search condition, but one set needs an MRU (Most Recently Used) list of 100 and the other set requires an MRU of 200. Say the entry is like this class Student { int student_no; char sex; std::string address; }; The search criterion is student_no, but for sex='m' we need an MRU of 200 and for sex='f' we need an MRU of 100. Now I have a solution wherein I introduce a new ordered index to maintain ordering. For example the IndexSpecifierList will be something like this typedef multi_index_container< Student, indexed_by< ordered_unique< member<Student, int, &Student::student_no> >, ordered_unique< composite_key< Student, member<Student, char, &Student::sex>, member<Student, int, &Student::sex_specific_student_counter> > > > > student_set Now every time I insert a new one, I have to take an equal_range on index 2 and remove the oldest one, and if something is re-used, I have to update it by incrementing the counter. Is there a better solution to this kind of problem? Thanks, Gokul.
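
    For the eviction side, Boost's own documentation sketches an MRU structure built from a sequenced index plus a hashed lookup index; a minimal adaptation to the Student example might keep one such container per eviction policy (say, capacity 200 for 'm' and 100 for 'f') rather than one combined container. The class and helper names below are made up for illustration:

        #include <boost/multi_index_container.hpp>
        #include <boost/multi_index/hashed_index.hpp>
        #include <boost/multi_index/member.hpp>
        #include <boost/multi_index/sequenced_index.hpp>
        #include <cstddef>
        #include <string>
        #include <utility>

        struct Student {
            int student_no;
            char sex;
            std::string address;
        };

        // MRU list of Students: the sequenced index keeps recency order,
        // the hashed index gives O(1) lookup by student_no.
        class StudentMru {
            typedef boost::multi_index_container<
                Student,
                boost::multi_index::indexed_by<
                    boost::multi_index::sequenced<>,                 // recency order
                    boost::multi_index::hashed_unique<
                        boost::multi_index::member<Student, int, &Student::student_no> >
                >
            > container;

        public:
            explicit StudentMru(std::size_t max_size) : max_size_(max_size) {}

            // Insert or refresh a student; the least recently used entry is evicted
            // once capacity is exceeded.
            void touch(const Student& s) {
                std::pair<container::iterator, bool> p = items_.push_front(s);
                if (!p.second)
                    items_.relocate(items_.begin(), p.first);   // already present: move to front
                else if (items_.size() > max_size_)
                    items_.pop_back();                          // evict the least recently used
            }

            // Lookup by student_no (does not refresh recency).
            const Student* find(int student_no) const {
                container::nth_index<1>::type::const_iterator it =
                    items_.get<1>().find(student_no);
                return it == items_.get<1>().end() ? 0 : &*it;
            }

        private:
            container items_;
            std::size_t max_size_;
        };

        // One instance per eviction policy, e.g.:
        //   StudentMru male_mru(200), female_mru(100);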

    Read the article

  • Best Java library(ies) for forgiving command interpreter

    - by vkraemer
    I am looking for a library or set of libraries that will help me write a forgiving command interpreter. A forgiving command interpreter would be a command interpreter which can deal with simple and even not so simple spelling and word order mistakes in the input. My goal is to have an interpreter which would take the input (a command) from a user and then: execute the command, if it is correct. apply corrections to the command, until a correct command is generated and then present that command to the user to confirm whether it is 'what the user meant'.
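
    The poster is after a Java library, but the core 'forgiving' step is usually just fuzzy matching of the input against the known command vocabulary, e.g. by edit distance. A small language-agnostic sketch of that idea (shown in C++ here, with a hypothetical command list; the same logic ports directly to Java):

        #include <algorithm>
        #include <iostream>
        #include <limits>
        #include <string>
        #include <vector>

        // Classic Levenshtein edit distance between two strings.
        std::size_t editDistance(const std::string& a, const std::string& b) {
            std::vector<std::size_t> prev(b.size() + 1), cur(b.size() + 1);
            for (std::size_t j = 0; j <= b.size(); ++j) prev[j] = j;
            for (std::size_t i = 1; i <= a.size(); ++i) {
                cur[0] = i;
                for (std::size_t j = 1; j <= b.size(); ++j) {
                    std::size_t subst = prev[j - 1] + (a[i - 1] == b[j - 1] ? 0 : 1);
                    cur[j] = std::min({prev[j] + 1, cur[j - 1] + 1, subst});
                }
                prev.swap(cur);
            }
            return prev[b.size()];
        }

        int main() {
            // Hypothetical command vocabulary.
            std::vector<std::string> commands = {"connect", "disconnect", "status", "quit"};
            std::string input = "conect";   // misspelled user input

            // Pick the known command with the smallest edit distance and ask to confirm.
            std::string best;
            std::size_t bestDist = std::numeric_limits<std::size_t>::max();
            for (const std::string& c : commands) {
                std::size_t d = editDistance(input, c);
                if (d < bestDist) { bestDist = d; best = c; }
            }
            if (bestDist == 0)
                std::cout << "executing " << best << "\n";
            else
                std::cout << "did you mean '" << best << "'? (edit distance " << bestDist << ")\n";
        }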

    Read the article

  • Using the filename for GET data and making the PHP page output as a JPG extension?

    - by Rob
    Alright, currently I'm using GD to create a PNG image that logs a few different things, depending on the GET data, for example: http://example.com/file.php?do=this will log one thing while http://example.com/file.php?do=that will log another thing. However, I'd like to do it without GET data, so instead http://example.com/dothis.php will log one thing, and http://example.com/dothat.php will log the other. But on top of that, I'd also like to make it accessible via the JPG file extension. I've seen this done but I can't figure out how. So that way http://example.com/dothis.JPG will log one thing, while http://example.com/dothat.JPG logs the other. The logging part is simple, of course. I simply need to know how to use filenames in place of the GET data and how to make the PHP file accessible via a jpg file extension.

    Read the article

  • Confirm bug Magento 1.4 'show/hide editor' in CMS

    - by latvian
    Hi When entering code in a CMS static block (possibly a page as well) and in this code there are empty DIV tags such as: <a href="javascript:hide1(),show2(),hide3()"><div class="dropoff_button"></div></a> The DIV tags will be gone the next time you open the block to edit; it will look like this <a href="javascript:hide1(),show2(),hide3()"> </a> without the div tags... and saving again modifies your code. I think it has something to do with the 'show/hide editor'. By default it goes into the WYSIWYG editor, so when updating a static block I don't see any other solution than 1. 'hide the editor' by clicking 'show/hide editor', 2. delete the old code from the editor, 3. get code that doesn't miss the DIVs, 4. merge the new code with the code from 3 in some editing software other than Magento, 5. paste the result into the Magento editor, 6. save. Is this a bug? What is your solution? Can I turn off the WYSIWYG editor?

    Read the article

  • Primary language - QtC++, C#, Java?

    - by Airjoe
    I'm looking for some input, but let me start with a bit of background (for tl;dr skip to end). I'm an IT major with a concentration in networking. While I'm not a CS major nor do I want to program as a vocation, I do consider myself a programmer and do pretty well with the concepts involved. I've been programming since about 6th grade, started out with a proprietary game creation language that made my transition into C++ at college pretty easy. I like to make programs for myself and friends, and have been paid to program for local businesses. A bit about that- I wrote some programs for a couple local businesses in my senior year in high school. I wrote management systems for local shops (inventory, phone/pos orders, timeclock, customer info, and more stuff I can't remember). It definitely turned out to be over my head, as I had never had any formal programming education. It was a great learning experience, but damn was it crappy code. Oh yeah, by the way, it was all vb6. So, I've used vb6 pretty extensively, I've used c++ in my classes (intro to programming up to algorithms), used Java a little bit in another class (had to write a ping client program, pretty easy) and used Java for some simple Project Euler problems to help learn syntax and such when writing the program for the class. I've also used C# a bit for my own simple personal projects (simple programs, one which would just generate an HTTP request on a list of websites and notify if one responded unexpectedly or not at all, and another which just held a list of things to do and periodically reminded me to do them), things I would've written in vb6 a year or two ago. I've just started using Qt C++ for some undergrad research I'm working on. Now I've had some formal education, I [think I] understand organization in programming a lot better (I didn't even use classes in my vb6 programs where I really should have), how it's important to structure code, split into functions where appropriate, document properly, efficiency both in memory and speed, dynamic and modular programming etc. I was looking for some input on which language to pick up as my "primary". As I'm not a "real programmer", it will be mostly hobby projects, but will include some 'real' projects I'm sure. From my perspective: QtC++ and Java are cross platform, which is cool. Java and C# run in a virtual machine, but I'm not sure if that's a big deal (something extra to distribute, possibly a bit slower? I think Qt would require additional distributables too, right?). I don't really know too much more than this, so I appreciate any help, thanks! TL;DR Am an avocational programmer looking for a language, want quick and straight forward development, liked vb6, will be working with database driven GUI apps- should I go with QtC++, Java, C#, or perhaps something else?

    Read the article

  • Simplest way to extend doctrine for MVC Models

    - by RobertPitt
    I'm developing my own framework that uses namespaces. Doctrine is already integrated into my autoloading system and I'm now at the stage where I'll be creating the model system for my application. Usually I would create a simple model like so: namespace Application\Models; class Users extends \Framework\Models\Database{} which would inherit all the default database model methods. But with Doctrine I'm still learning how it all works, as it's not just a simple DBAL. I need to understand what part of Doctrine my classes would extend so I can do the following: namespace Application\Models; class Users Extends Doctrine\Something\Table { public $__table_name = "users"; } And thus within the controller I would be able to do the following: public function Display($uid) { $User = $this->Model->Users->findOne(array("id" => (int)$id)); } Can anyone help me get my head around this?

    Read the article

  • [LaTeX] positions of page numbers, position of chapter headings, chapters AND Table of Contents, Ref

    - by kaikanmonaco
    I am writing my PhD thesis (120+ pages) in latex, the deadline is approaching and I am struggling with layout problems. I am using the documentstyle book. I am posting both problems in this one thread because I am not sure if the solution might be related to both problems or not. Problems are: 1.) The page numbers are mostly located on the top-right of each page (this is correct and where I want them to be). However, only on the first page of chapters and on the first page of what I call "special chapters", the page number is located bottom-centered. With "special chapters" I mean: List of Contents, List of Figures, List of Tables, References, Index. My university will not accept the thesis like this. The page number must ALWAYS be top-right one each page, even if the page is the first page of a chapter or the first page of something like the List of Contents. How can I fix this? 2.) On the first page of chapters and "special chapters" (List of Contents...), the chapter title is located far too low on the page. This is the standard layout of LaTeX with documentstyle book I think. However, the chapter title must start at the very top of the page! I.e. the same height as the normal text on the pages that follow. I mean the chapter title, not the header. I.e., if there is a chapter called "Chapter 1 Dynamics of foobar under mechanical stress" then that text has to start from the top the page, but right now it starts several centimeters below the top. How can I fix this? Have tried all kinds of things to no effect, I'd be very thankful for a solution! Thanks.
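
    A preamble sketch of the usual fix for both issues with the book class: fancyhdr lets you override the 'plain' page style that LaTeX silently switches to on the first page of chapters, the table of contents, the lists of figures/tables, the bibliography and the index (problem 1), and titlesec lets you pull the chapter title up towards the top margin (problem 2). The -50pt value is only a starting point to tune, and the \titleformat line changes the chapter heading to a single hanging line, so adjust both to your university's required style:

        \usepackage{fancyhdr}
        \pagestyle{fancy}
        \fancyhf{}                          % clear all header/footer fields
        \fancyhead[R]{\thepage}             % page number at the top right of every page
        \renewcommand{\headrulewidth}{0pt}

        % First pages of chapters, ToC, LoF, LoT, bibliography and index use the
        % "plain" page style, so override that style as well.
        \fancypagestyle{plain}{%
          \fancyhf{}%
          \fancyhead[R]{\thepage}%
          \renewcommand{\headrulewidth}{0pt}}

        % Move chapter headings to the top of the page (negative space before the title).
        \usepackage{titlesec}
        \titleformat{\chapter}[hang]{\normalfont\huge\bfseries}{\thechapter}{1em}{}
        \titlespacing*{\chapter}{0pt}{-50pt}{20pt}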

    Read the article

  • $(ajax) using webservice response

    - by loviji
    Hello, I have a simple ajax query. $(document).ready(function () { var infManipulate = true; var tableID=<%=TableID %>; var userName="<%=CurrentUserName%>"; $.ajax({ type: "POST", url: "../../WS/Permission.asmx/CanManipulate", data: "tableID=" + tableID + "&userName=" + userName + "", success: function (msg) { //**how can I tell whether the method CanManipulate returned true or false?** msg; debugger; }, error: function (msg) { } }); And a simple web method [WebMethod] public bool CanManipulate(int tableID, string userName) { //some op. return prm.haveThisMatch(userName, tableID, "InfManipulate"); } How can I tell in the $.ajax success callback whether the method CanManipulate returned true or false?

    Read the article

  • Add page item to either Joomla, Drupal or Wordpress

    - by HP
    I just want to design this very simple website. Basically there are multiple pages A, B, C... and each page has items A1, A2... or B1, B2... These items follow a fixed HTML template (table, image) and only specific content fields (name, body text...) can be changed in the back-end. Back-end users can add new pages or new items to each page. Does anyone know of a plugin for either Joomla, Drupal or WordPress that can serve the simple purpose above? Thanks

    Read the article

  • Extracting DCT coefficients from encoded images and video

    - by misha
    Is there a way to easily extract the DCT coefficients (and quantization parameters) from encoded images and video? Any decoder software must be using them to decode block-DCT encoded images and video. So I'm pretty sure the decoder knows what they are. Is there a way to expose them to whomever is using the decoder? I'm implementing some video quality assessment algorithms that work directly in the DCT domain. Currently, the majority of my code uses OpenCV, so it would be great if anyone knows of a solution using that framework. I don't mind using other libraries (perhaps libjpeg, but that seems to be for still images only), but my primary concern is to do as little format-specific work as possible (I don't want to reinvent the wheel and write my own decoders). I want to be able to open any video/image (H.264, MPEG, JPEG, etc) that OpenCV can open, and if it's block DCT-encoded, to get the DCT coefficients. In the worst case, I know that I can write up my own block DCT code, run the decompressed frames/images through it and then I'd be back in the DCT domain. That's hardly an elegant solution, and I hope I can do better. Presently, I use the fairly common OpenCV boilerplate to open images: IplImage *image = cvLoadImage(filename); // Run quality assessment metric The code I'm using for video is equally trivial: CvCapture *capture = cvCaptureFromAVI(filename); while (cvGrabFrame(capture)) { IplImage *frame = cvRetrieveFrame(capture); // Run quality assessment metric on frame } cvReleaseCapture(&capture); In both cases, I get a 3-channel IplImage in BGR format. Is there any way I can get the DCT coefficients as well?
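
    For the still-image JPEG case at least, libjpeg exposes the quantized block coefficients directly through jpeg_read_coefficients, so no hand-written decoder is needed; this is a reduced sketch (JPEG files only, not the H.264/MPEG video path, and with libjpeg's default exit-on-error handling):

        #include <stdio.h>
        #include <jpeglib.h>

        // Print the quantized DC coefficient of the first 8x8 block of each component.
        void dumpDctCoefficients(const char* filename) {
            FILE* infile = fopen(filename, "rb");
            if (!infile) return;

            jpeg_decompress_struct cinfo;
            jpeg_error_mgr jerr;
            cinfo.err = jpeg_std_error(&jerr);      // default handler exits on error
            jpeg_create_decompress(&cinfo);
            jpeg_stdio_src(&cinfo, infile);
            jpeg_read_header(&cinfo, TRUE);

            // Returns the coefficient arrays without performing the inverse DCT.
            jvirt_barray_ptr* coeff_arrays = jpeg_read_coefficients(&cinfo);

            for (int ci = 0; ci < cinfo.num_components; ++ci) {
                jpeg_component_info* comp = &cinfo.comp_info[ci];
                JBLOCKARRAY rows = (*cinfo.mem->access_virt_barray)(
                    (j_common_ptr)&cinfo, coeff_arrays[ci], 0, 1, FALSE);
                JCOEFPTR block = rows[0][0];        // 64 quantized coefficients; block[0] is the DC term
                printf("component %d (quant table %d): DC = %d\n",
                       ci, comp->quant_tbl_no, block[0]);
                // The matching quantization table is cinfo.quant_tbl_ptrs[comp->quant_tbl_no]
            }

            jpeg_finish_decompress(&cinfo);
            jpeg_destroy_decompress(&cinfo);
            fclose(infile);
        }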

    Read the article

  • 3D Camera looking at arbitrary point, but never rolling to do so

    - by Nektarios
    (In C#/XNA 4.0) I have Vector3 cameraPosition and Vector3 targetPosition. I want to have my camera look at the target, always 'facing' it, but never rolling. So roll is always neutral; to view the target the camera only ever adjusts pitch or yaw. I've tried countless combinations of methods and information I find here and on the web, but I haven't found anything that works properly. I think my issue may be my 'Up' vector (which I've tried as .Up, 1,0,0, 0,1,0, 0,0,1). When I move my camera I do: CameraPosition += moveSpeed * vectorToAdd; UpdateViewMatrix(); UpdateViewMatrix() is... well, I've tried everything I have seen. At its most simple... View = Matrix.CreateLookAt(CameraPosition, targetPosition, upVector); where upVector has been Vector3.Up, 1, 0, 0; 0, 1, 0; 0, 0, 1, or other more 'proper' attempts to get my actual up vector. This sounds like it's probably my problem. This should be dead simple, help!
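
    What usually keeps the roll neutral is passing a constant world up (0, 1, 0) to the look-at construction rather than the camera's own up vector; the view basis is then rebuilt from that constant every frame and only degenerates when the view direction is (nearly) parallel to world up. A small sketch of that basis construction in plain C++ with a hypothetical Vec3 type (XNA's Matrix.CreateLookAt with Vector3.Up builds a comparable basis, with its own handedness conventions):

        #include <cmath>

        struct Vec3 { float x, y, z; };

        Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
        Vec3 cross(Vec3 a, Vec3 b) { return {a.y * b.z - a.z * b.y,
                                             a.z * b.x - a.x * b.z,
                                             a.x * b.y - a.y * b.x}; }
        Vec3 normalize(Vec3 v) {
            float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
            return {v.x / len, v.y / len, v.z / len};
        }

        // Build a roll-free camera basis: 'up' stays as close to world up as possible.
        // Degenerates if forward is (nearly) parallel to worldUp, so clamp pitch to avoid that.
        void lookAtBasis(Vec3 eye, Vec3 target, Vec3& forward, Vec3& right, Vec3& up) {
            const Vec3 worldUp = {0.0f, 1.0f, 0.0f};    // a constant, never the camera's own up
            forward = normalize(sub(target, eye));
            right   = normalize(cross(worldUp, forward));
            up      = cross(forward, right);            // unit length and roll-free by construction
        }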

    Read the article

  • Multiple leaf methods problem in composite pattern

    - by Ondrej Slinták
    At work, we are developing a PHP application that will later be re-programmed in Java. With some basic knowledge of Java, we are trying to design everything to be easily re-written, without any headaches. An interesting problem came up when we tried to implement the composite pattern with a huge number of methods in the leaves. What we are trying to achieve (not using interfaces, it's just an example): class Composite { ... } class LeafOne { public function Foo( ); public function Moo( ); } class LeafTwo { public function Bar( ); public function Baz( ); } $c = new Composite( Array( new LeafOne( ), new LeafTwo( ) ) ); // will call method Foo in all classes in composite that contain this method $c->Foo( ); It seems like a pretty much classic Composite pattern, but the problem is that we will have quite a lot of leaf classes and each of them might have ~5 methods (of which a few might differ from the others). One of our solutions, which seems to be the best one so far and might actually work, is using the __call magic method to call methods in the leaves. Unfortunately, we don't know if there is an equivalent of it in Java. So the actual question is: Is there a better solution for this, using code that would eventually be easily re-coded into Java? Or do you recommend any other solution? Perhaps there's some different, better pattern I could use here. In case there's something unclear, just ask and I'll edit this post.

    Read the article

  • Is there any LAME C++ wrapper/simplifier (working on Linux, Mac and Win from pure code)?

    - by Ole Jak
    So I want to create a simple PCM to MP3 C++ project. I want it to use LAME. I love LAME, but it is really big, so I need some kind of open-source simplifier of the pure LAME code workflow, working from pure code. So to say, I give it a file with PCM and a destination file, and call something like LameSimple.ToMP3(file with PCM, file with MP3, 44100, 16, MP3, VBR); or some such thing in 4 - 5 lines (examples of course should exist), and I have what I needed. It should be light, simple, powerful, open source and cross-platform. Is there anything like this?!?
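
    For scale, the raw LAME C API is itself not far from the 4 - 5 lines asked for; a sketch of encoding raw interleaved 16-bit stereo PCM at 44100 Hz with default VBR (no WAV header parsing, minimal error handling, and the header path lame/lame.h may differ per install):

        #include <lame/lame.h>
        #include <cstdio>
        #include <vector>

        // Encode raw interleaved 16-bit stereo PCM (44100 Hz) to MP3 with default VBR.
        bool pcmToMp3(const char* pcmPath, const char* mp3Path) {
            std::FILE* in  = std::fopen(pcmPath, "rb");
            std::FILE* out = std::fopen(mp3Path, "wb");
            if (!in || !out) return false;

            lame_global_flags* gf = lame_init();
            lame_set_in_samplerate(gf, 44100);
            lame_set_num_channels(gf, 2);
            lame_set_VBR(gf, vbr_default);
            lame_init_params(gf);

            const int FRAMES = 4096;                                 // samples per channel per chunk
            std::vector<short> pcm(FRAMES * 2);                      // interleaved L/R
            std::vector<unsigned char> mp3(FRAMES * 5 / 4 + 7200);   // worst case per LAME docs

            std::size_t framesRead;
            while ((framesRead = std::fread(pcm.data(), 2 * sizeof(short), FRAMES, in)) > 0) {
                int bytes = lame_encode_buffer_interleaved(gf, pcm.data(), (int)framesRead,
                                                           mp3.data(), (int)mp3.size());
                if (bytes < 0) break;                                // encoder error
                std::fwrite(mp3.data(), 1, bytes, out);
            }
            int bytes = lame_encode_flush(gf, mp3.data(), (int)mp3.size());
            std::fwrite(mp3.data(), 1, bytes, out);

            lame_close(gf);
            std::fclose(in);
            std::fclose(out);
            return true;
        }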

    Read the article

  • How do I see if an established socket is stuck on a server that's expecting input?

    - by Parker
    I have a script that scans ports for open proxy servers. The problem is that if it encounters a login program (specifically telnet) then it hangs there forever, since it doesn't know what to do, and eventually the server closes the connection. The simple solution would be to create a bunch of cases. If telnet, do this. If SSH, do that. If something else, blah blah blah. I'd like an umbrella solution since the script is not a high priority for me. The script, as it is now, is available at http://parkrrr.net/socks/scan.phps On a small scale (the page maybe averages 15 hits/day) it's fine, but on a larger scale I'd be worried about a lot of open zombie sockets. Swapping the !$strpos doesn't work since servers can return more information than what you requested (headers, ads, etc). Only accepting a fixed number of bytes (as opposed to appending until EOF, which it does now) from the $fgets also does not seem to work. I am sure this is where it gets stuck: while (!feof($fp)) { $data.=fgets($fp,512); } But what can I do? Any other suggestions/warnings would also be welcome.

    Read the article

  • Why are web developers so keen to use lists?

    - by Bob
    I've been developing for a while and often develop sites using menu tabs. And I can't figure out why so many web developers like using lists (ul, li, etc.) rather than just using plain old divs. I can make menus in divs which are simple and work perfectly in every browser. With lists, I'm usually trying to hack it one way or another to get it to work properly. So my question is simple: why should I use lists to create my menus instead of divs?

    Read the article
