Search Results

Search found 9439 results on 378 pages for 'telepath needed'.


  • Learn Many Languages

    - by Phil Factor
    Around twenty-five years ago, I was trying to solve the problem of recruiting suitable developers for a large business. I visited the local University (it was a Technical College then). My mission was to remind them that we were a large, local employer of technical people and to suggest that, as they were in the business of educating young people for a career in IT, we should work together. I anticipated a harmonious chat where we could suggest to them the idea of mentioning our name to some of their graduates. It didn’t go well. The academic staff displayed a degree of revulsion towards the whole topic of IT in the world of commerce that surprised me; tweed met charcoal-grey, trainers met black shoes. However, their antipathy to commerce was something we could have worked around, since few of their graduates were destined for a career as university lecturers. They asked me what sort of language skills we needed. I tried ducking the invidious task of naming computer languages, since I wanted recruits who were quick to adapt and learn, with a broad understanding of IT, including development methodologies, technologies, and data. However, they pressed the point and I ended up saying that we needed good working knowledge of C and BASIC, though FORTRAN and COBOL were, at the time, still useful. There was a ghastly silence. It was as if I’d recommended the beliefs and practices of the Bogomils of Bulgaria to a gathering of Cardinals. They stared at me severely, like owls, until the head of department broke the silence, informing me in clipped tones that they taught only Modula 2. Now, I wouldn’t blame you if at this point you hurriedly had to look up ‘Modula 2’ on Wikipedia. Based largely on Pascal, it was a specialist language for embedded systems, but I’ve never ever come across it in a commercial business application. Nevertheless, it was an excellent teaching language since it taught modules, scope control, multiprogramming and the advantages of encapsulating a set of related subprograms and data structures. As long as the course also taught how to transfer these skills to other, more useful languages, it was not necessarily a problem. I said as much, but they gleefully retorted that the biggest local employer, a defense contractor specializing in Radar and military technology, used nothing but Modula 2. “Why teach any other programming language when they will be using Modula 2 for all their working lives?” said a complacent lecturer. On hearing this, I made my excuses and left. There could be no meeting of minds. They were providing training in a specific computer language, not an education in IT. Twenty years later, I once more worked nearby and regularly passed the long-deserted ‘brownfield’ site of the erstwhile largest local employer; the end of the cold war had led to lean times for defense contractors. A digger was about to clear the rubble of the long-demolished factory along with the accompanying growth of buddleia and thistles, in order to lay the infrastructure for ‘affordable housing’. Modula 2 was a distant memory. Either those employees had short working lives or they’d retrained in other languages. The University, by contrast, was thriving, but I wondered if their erstwhile graduates had ever cursed the narrow specialization of their training in IT, as they struggled with the unexpected variety of their subsequent careers.

    Read the article

  • Webcast Q&A: ResCare Solves Content Lifecycle Challenges with Oracle WebCenter

    - by Kellsey Ruppel
    Last week we had the fourth webcast in our WebCenter in Action webcast series, "ResCare Solves Content Lifecycle Challenges with Oracle WebCenter", where customer Joe Lichtefeld from ResCare and Wayne Boerger & Doug Thompson from Oracle Partner TEAM Informatics shared how Oracle WebCenter is allowing ResCare to solve content lifecycle challenges, reduce compliance and business risks, and increase adoption of the intranet as a primary business communication tool. In case you missed it, here's a recap of the Q&A.

    Joe Lichtefeld, ResCare

    Q: Did you run into any issues in the deployment of the platform?
    A: We experienced very few issues when implementing the content management and search functionalities. There were some challenges in determining the metadata structure. We tried to strike a fine balance between having enough fields to provide the functionality needed and limiting the impact on the contributing members.

    Q: What has been the biggest benefit your end users have seen?
    A: The biggest benefit to date is two-fold. Content on the intranet can be maintained by the individual contributors in a more timely fashion than in our old process, where all requests were updated by IT. The other big benefit is the ability to find the most current version of a document instead of relying on emails and phone calls to track down the "current" version.

    Q: Was there any resistance internally when implementing the solution? If so, how did you overcome it?
    A: We experienced very little resistance. Most of our community groups were eager to be able to contribute and maintain their information. We had the normal hurdles of training and follow-up training that come with implementing a new system and process. As our second phase rolled out access to all employees, we received more positive feedback on the accessibility of information.

    Wayne Boerger & Doug Thompson, TEAM Informatics

    Q: Can you integrate multiple repositories with the Google Search Appliance?
    A: Yes, the Google Search Appliance is designed to index lots of different repositories, from both public and internal sources. There are included connectors for many repositories, such as SharePoint, databases, file systems, LDAP, and, with the TEAM GSA Connector, the Oracle Content Server. And the index for these repositories can be configured into different collections depending on the use cases that each customer has and, really, each need within a customer environment.

    Q: How many different filters can you add when the search results are returned?
    A: Presuming this question is about filtering the search results: you can add as many filters as you like, and it can be done by collection or any number of other criteria. Most importantly, customers now have the ability to limit the returned content by a set metadata value.

    Q: With the TEAM Sites Connector, what types of content can you sync?
    A: There's really no limit; if it can be checked into the content server, then it is eligible for sync into Sites. So basically, any digital file that has relevance to a Sites implementation can be checked into the WebCenter Content central repository, and then the connector can/will manage it.

    Q: Using the Connector, are there any limitations around where in Sites the synced content can be used?
    A: There are no limitations on where it can be used. When setting up your environment to use it, you just need to think through the different destinations on the Sites side that might use the content; that way you've got the right information to create the rules needed for the connector.

    If you missed the webcast, be sure to catch the replay to see a live demonstration of WebCenter in action! ResCare Solves Content Lifecycle Challenges with Oracle WebCenter from Oracle WebCenter

    Read the article

  • AI to move custom-shaped spaceships (shape affecting movement behaviour)

    - by kaoD
    I'm designing a networked, turn-based, 3D-6DOF space fleet combat strategy game which relies heavily on ship customization. Let me explain the game a bit, since you need to know a bit about it to set up the question. What I aim for is the ability to create your own fleet of ships with custom shapes and attached modules (propellers, tractor beams...) which would give advantages and disadvantages to each ship, so you have lots of different fleet distributions. E.g., a long ship with two propellers at the side would let the ship spin around that plane easily, while bigger ships would move slowly unless you place lots of propellers at the back (therefore spending more "construction" points and energy when moving, and it will only move fast towards that direction.) I plan to balance the whole game around this feature. The game would revolve around two phases: an orders phase and a combat phase. During the orders phase, you command the different ships. When all players finish the orders phase, the combat phase begins and the ship orders get resolved in real-time for some time, then the action pauses and there's a new orders phase. The problem comes when I think about player input. To move a ship, you need to turn different propellers on or off if you want to steer, travel forward, brake, rotate in place... These propellers don't have to work at their whole power, so you can achieve more movement combinations with fewer propellers. I think this approach is a bit boring. The player doesn't want to fiddle with motors or anything; you just want to MOVE and KILL. The way I intend the player to give orders to these ships is by a destination and a rotation, and then the AI would calculate the correct propeller power to achieve that movement and rotation. Propulsion doesn't have to be the same throughout the entire turn calculation (after the orders have been given), so it would be cool if the ships reacted as they move, adjusting the power of the propellers for their needs dynamically, but that may be too hard to implement and it's not really needed for the game to work. In both cases, how would that AI decide which propellers to activate for the best (or at least not the worst) trajectory to be achieved? I thought about some approaches:
    Learning AI: The ship types would learn about their movement by trial and error, adjusting their behaviour with more uses, and finally becoming "smart". I don't want to get involved THAT far in AI coding, and I think it could be frustrating for the player (even if you can let it learn without playing.)
    Pre-calculated timestep movement: Upon ship creation, ALL possible movements are calculated for each propeller configuration and power for a given delta-time. Memory intensive, ugly, bad.
    Pre-calculated trajectories: The same as above, but not for each delta-time but for the whole trajectory, which would then be fitted as closely as possible. Requires a fixed propeller configuration for the whole combat phase and is still memory intensive, ugly and bad.
    Continuous brute forcing: The AI continuously checks ALL possible propeller configurations throughout the entire combat phase, precalculates a few time steps and decides which is the best one based on that. Con: what's good now might not be that good later, and it's too CPU intensive, ugly, and bad too.
    Single brute forcing: Same as above, but only brute forcing at the beginning of the simulation, so it needs a constant propeller configuration throughout the entire combat phase.
    Continuous angle check: This is not a full movement method, but maybe a way to discard "stupid" propeller configurations. Given the propeller's current normal vector and the final one, you can approximate the power needed for the propeller based on the angle. You must do this continuously throughout the whole combat phase. I figured this one out recently, so I haven't put much thought into it. A priori, it has the "what's good now might not be that good later" drawback too, and it doesn't care about the other propellers, which may act together to make a better propelling configuration.
    I'm really stuck here. Any ideas?
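    For reference, the "calculate the correct propeller power" step can also be framed as a classic thrust-allocation problem and solved with a least-squares fit each time step. The sketch below is only an illustration of that idea (Python with numpy; the two-propeller layout and all names are hypothetical, not from the game), mapping a desired net force and torque in the ship's body frame to per-propeller power levels:

        import numpy as np

        # Hypothetical layout: position and thrust direction of each propeller,
        # both expressed in the ship's body frame.
        positions = np.array([[-2.0, 0.0, 0.0],   # left propeller
                              [ 2.0, 0.0, 0.0]])  # right propeller
        directions = np.array([[0.0, 1.0, 0.0],
                               [0.0, 1.0, 0.0]])  # both push the ship "forward" (+y)

        def allocate_thrust(desired_force, desired_torque):
            # Each column of A is one propeller's contribution at full power:
            # force = d_i, torque = r_i x d_i.
            force_cols = directions.T                        # 3 x n
            torque_cols = np.cross(positions, directions).T  # 3 x n
            A = np.vstack([force_cols, torque_cols])         # 6 x n
            b = np.concatenate([desired_force, desired_torque])
            # Least-squares solve for the power levels that best approximate b.
            u, *_ = np.linalg.lstsq(A, b, rcond=None)
            # Propellers can't pull or exceed full power.
            return np.clip(u, 0.0, 1.0)

        # Pure forward thrust, no rotation: both propellers end up at half power.
        print(allocate_thrust(np.array([0.0, 1.0, 0.0]), np.zeros(3)))

    Clipping the unconstrained solution is a crude stand-in for a properly bounded solver, but re-solving this small system every time step would give the dynamic "adjust as the ship moves" behaviour described above without any learning or brute forcing.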

    Read the article

  • What common interface would be appropriate for these game object classes?

    - by Jefffrey
    Question
    A component based system's goal is to solve the problems that derive from inheritance: for example, the fact that some parts of the code (called components) are reused by very different classes that, hypothetically, would lie in very different branches of the inheritance tree. That's a very nice concept, but I've found out that CBS is often hard to accomplish without using ugly hacks. Implementations of this system are often far from clean. But I don't want to discuss this any further. My question is: how can I solve the same problems a CBS tries to solve with a very clean interface? (possibly with examples; there is a lot of abstract talk about the "perfect" design already).

    Context
    Here's an example I was going for before realizing I was just reinventing inheritance again:

        class Human {
        public:
            Position position;
            Movement movement;
            Sprite sprite;
            // other human specific components
        };

        class Zombie {
            Position position;
            Movement movement;
            Sprite sprite;
            // other zombie specific components
        };

    After writing that I realized I needed an interface, otherwise I would have needed N containers for N different types of objects (or to use boost::variant to gather them all together). So I've thought of polymorphism (move what systems do in a CBS design into class-specific functions):

        class Entity {
        public:
            virtual void on_event(Event) {} // not pure virtual on purpose
            virtual void on_update(World) {}
            virtual void on_draw(Window) {}
        };

        class Human : public Entity {
        private:
            Position position;
            Movement movement;
            Sprite sprite;
        public:
            virtual void on_event(Event) { ... }
            virtual void on_update(World) { ... }
            virtual void on_draw(Window) { ... }
        };

        class Zombie : public Entity {
        private:
            Position position;
            Movement movement;
            Sprite sprite;
        public:
            virtual void on_event(Event) { ... }
            virtual void on_update(World) { ... }
            virtual void on_draw(Window) { ... }
        };

    Which was nice, except for the fact that now the outside world cannot even know where a Human is positioned (it does not have access to its position member). That would be useful for tracking the player position for collision detection, or if on_update the Zombie wanted to track down its nearest human to move towards him. So I added const Position& get_position() const; to both the Zombie and Human classes. And then I realized that this functionality was shared, so it should have gone to the common base class: Entity. Do you notice anything? Yes, with that methodology I would end up with a god Entity class full of common functionality (which is the thing I was trying to avoid in the first place).

    Meaning of "hacks" in the implementations I'm referring to
    I'm talking about the implementations that define entities as simple IDs to which components are dynamically attached. Their implementation can vary from C-style:

        int last_id;
        Position* positions[MAX_ENTITIES];
        Movement* movements[MAX_ENTITIES];

    where positions[i], movements[i], component[i], ... make up the entity, to more C++-style:

        int last_id;
        std::map<int, Position> positions;
        std::map<int, Movement> movements;

    from which systems can detect whether an entity/id has the components they need attached.
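    For comparison, here is the id-plus-component-maps style described above as a minimal runnable sketch (transliterated into Python for brevity; all names are illustrative). A system simply iterates over the entities that have every component it needs and skips the rest:

        positions = {}   # entity id -> (x, y)
        movements = {}   # entity id -> (dx, dy)

        _next_id = 0
        def create_entity():
            global _next_id
            _next_id += 1
            return _next_id

        def movement_system(dt):
            # Run over every entity that has BOTH components attached;
            # entities lacking either one are simply not visited.
            for eid in positions.keys() & movements.keys():
                x, y = positions[eid]
                dx, dy = movements[eid]
                positions[eid] = (x + dx * dt, y + dy * dt)

        human = create_entity()
        positions[human] = (0.0, 0.0)
        movements[human] = (1.0, 0.0)
        movement_system(0.016)
        print(positions[human])  # (0.016, 0.0)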

    Read the article

  • The World of SQL Database Deployment

    - by GGBlogger
    In my early development days, I used Microsoft Access for building databases. It made things easy, since I only needed to package the database with the installation package so my clients would have access to it. When we began the development of a new package in Visual Studio .NET, I decided to use SQL Server Express. It was free and provided good tools - also free. I thought it was a tremendous idea until it came time to distribute our new software! What a surprise.

    The nightmare
    Ah, the choices! Detach the database and have the client reattach it to a newly installed – oh wait. FIRST my new client needs to download and install SQL Server Express with SQL Server Management Studio. That's not a great thing, but it is one more nightmare step for users who may have other versions of SQL installed. Then the question became – do we detach and reattach, or do we do a backup? It was too late (bad planning) to revert to Microsoft Access, but we badly needed a simple way to package and distribute both the database AND sample contents.

    Red Gate to the rescue
    It took me a while to find an answer, but I did find it in a package called SQL Packager, sold by a relatively unpublicized company in England called Red Gate. They call their products "ingeniously simple" and I must agree with that description. With SQL Packager you point to the database (more in a minute) you want to distribute. A few mouse clicks and dialogs and you have an executable file that you can ship virtually anywhere, virtually any way, which, when run, installs the database on your destination SQL Server instance! It really is that simple.

    Easier to show than tell
    Let's explore a hypothetical case. Let's say you have a local SQL database of customers and you have decided you want to share it with your subsidiaries or partners. Here is the underlying screen you will see on starting SQL Packager. There are a bunch of possibilities here, but I'm going to keep this relatively simple. At this point I simply want to illustrate the simplicity of generating an executable to deliver your database. You will notice that you can set up a new package, edit an existing package, or change a bunch of options.

    Start SQL Packager
    And the following is the default dialog you get on startup. In the next dialog, I've selected the Server and Database. I've also selected Windows Authentication. Pressing Next causes SQL Packager to run a number of checks and produce a report. Now you're given a comprehensive list of what is going to be packaged, and you're allowed to change it if you desire. I've never made any changes here, so I can't really make any suggestions. That just illustrates the comprehensive nature of so many Red Gate products, including this one. Clicking Next gives you still further options. SQL Packager then works its magic and shows you a dialog with the results. Packager then gives you a dialog of the scripts it has generated. The capture above only shows 1 of 4 tabs. Finally, pressing Next gives you the option to generate a .NET executable or a C# project. I've only generated an executable, so I'm not in a position to tell you what the C# project looks like. That may be the subject of further discussions. You can rename the package and tell SQL Packager where to save it. I've skipped a lot, but this will serve to illustrate the comprehensive (and ingenious) things Red Gate does. All in all, it's a superb way to distribute populated SQL databases. Oh – we'll save running the resulting executable for later also, but believe me, it's insanely simple.

    Read the article

  • The Growing Importance of Network Virtualization

    - by user12608550
    We often focus on server virtualization when we discuss cloud computing, but just as often we neglect to consider some of the critical implications of that technology. The ability to create virtual environments (or VEs [1]) means that we can create, destroy, activate and deactivate, and, more importantly, MOVE them around within the cloud infrastructure. This elasticity and mobility have profound implications for how network services are defined, managed, and used to provide cloud services. It's not just servers that benefit from virtualization; it's the network as well. Network virtualization is becoming a hot topic, and not just for discussion but for companies like Oracle and others who have recently acquired network virtualization companies [2,3]. But even before this topic became so prominent, Solaris engineers were working on technologies in Solaris 11 to virtualize network services, known as Project Crossbow [4]. And why is network virtualization so important? Because old assumptions about network devices, topology, and management must be re-examined in light of the self-service, elasticity, and resource-sharing requirements of cloud computing infrastructures. Static, hierarchical network designs and inter-system traffic flows need to be reconsidered, and quite likely re-architected, to take advantage of new features like virtual NICs and switches, bandwidth control, load balancing, and traffic isolation. For example, traditional multi-tier Web services (Web server, App server, DB server) that share net traffic over Ethernet wires can now be virtualized and hosted on shared-resource systems that communicate within a larger server at system bus speeds, increasing performance and reducing wired network traffic. And virtualized traffic flows can be monitored and adjusted as needed to optimize network performance for dynamically changing cloud workloads. Additionally, as VEs come and go and move around in the cloud, static network configuration methods cannot easily accommodate the routing and addressing flexibility that VE mobility implies; virtualizing the network itself is a requirement. Oracle Solaris 11 [5] includes key network virtualization technologies needed to implement cloud computing infrastructures. It includes features for the creation and management of virtual NICs and switches, and for the allocation and control of the traffic flows among VEs [6]. Additionally, it allows for both sharing and dedication of hardware components to network tasks, such as allocating specific CPUs and vNICs to VEs, and even protocol-specific management of traffic. So, have a look at your current network topology and management practices in view of evolving cloud computing technologies. And don't simply duplicate the physical architecture of servers and connections in a virtualized environment…rethink the traffic flows among VEs and how they can be optimized using Oracle Solaris 11 and other Oracle products and services.
    [1] I use the term "virtual environment" or VE here instead of the more commonly used "virtual machine" or VM, because not all virtualized operating system environments are full OS kernels under the control of a hypervisor…in other words, not all VEs are VMs. In particular, VEs include Oracle Solaris zones, as well as SPARC VMs (previously called LDoms), and x86-based Solaris and Linux VMs running under hypervisors such as OEL, Xen, KVM, or VMware.
    [2] Oracle follows VMware into network virtualization space with Xsigo purchase; http://www.mercurynews.com/business/ci_21191001/oracle-follows-vmware-into-network-virtualization-space-xsigo
    [3] Oracle Buys Xsigo; http://www.oracle.com/us/corporate/press/1721421
    [4] Oracle Solaris 11 Networking Virtualization Technology; http://www.oracle.com/technetwork/server-storage/solaris11/technologies/networkvirtualization-312278.html
    [5] Oracle Solaris 11; http://www.oracle.com/us/products/servers-storage/solaris/solaris11/overview/index.html
    [6] For example, the Solaris 11 'dladm' command can be used to limit the bandwidth of a virtual NIC, as follows:

        dladm create-vnic -l net0 -p maxbw=100M vnic0

    Read the article

  • Should I expose IObservable<T> on my interfaces?

    - by Alex
    My colleague and I have a dispute. We are writing a .NET application that processes massive amounts of data. It receives data elements, groups subsets of them into blocks according to some criterion, and processes those blocks. Let's say we have data items of type Foo arriving from some source (from the network, for example) one by one. We wish to gather subsets of related objects of type Foo, construct an object of type Bar from each such subset, and process objects of type Bar. One of us suggested the following design. Its main theme is exposing IObservable objects directly from the interfaces of our components.

        // ********* Interfaces **********
        interface IFooSource {
            // this is the event-stream of objects of type Foo
            IObservable<Foo> FooArrivals { get; }
        }

        interface IBarSource {
            // this is the event-stream of objects of type Bar
            IObservable<Bar> BarArrivals { get; }
        }

        // ********* Implementations *********
        class FooSource : IFooSource {
            // Here we put logic that receives Foo objects from the network
            // and publishes them to the FooArrivals event stream.
        }

        class FooSubsetsToBarConverter : IBarSource {
            IFooSource fooSource;
            IObservable<Bar> BarArrivals {
                get {
                    // Do some fancy Rx operators on fooSource.FooArrivals,
                    // like Buffer, Window, Join and others, and return IObservable<Bar>
                }
            }
        }

        // this class will subscribe to the bar source and do processing
        class BarsProcessor {
            BarsProcessor(IBarSource barSource);
            void Subscribe();
        }

        // ******************* Main ************************
        class Program {
            public static void Main(string[] args) {
                var fooSource = FooSourceFactory.Create();
                // this will create FooSubsetToBarConverter and BarsProcessor
                var barsProcessor = BarsProcessorFactory.Create(fooSource);
                barsProcessor.Subscribe();
                // this enters a loop of listening for Foo objects from the
                // network and notifying about their arrival.
                fooSource.Run();
            }
        }

    The other suggested a design whose main theme is using our own publisher/subscriber interfaces and using Rx inside the implementations only when needed.

        // ********** interfaces *********
        interface IPublisher<T> {
            void Subscribe(ISubscriber<T> subscriber);
        }

        interface ISubscriber<T> {
            Action<T> Callback { get; }
        }

        // ********** implementations *********
        class FooSource : IPublisher<Foo> {
            public void Subscribe(ISubscriber<Foo> subscriber) { /* ... */ }
            // here we put logic that receives Foo objects from some source
            // (the network?) and publishes them to the registered subscribers
        }

        class FooSubsetsToBarConverter : ISubscriber<Foo>, IPublisher<Bar> {
            void Callback(Foo foo) {
                // here we put logic that aggregates Foo objects and publishes
                // Bars when we have received a subset of Foos that match our
                // criteria; maybe we use Rx here internally.
            }
            public void Subscribe(ISubscriber<Bar> subscriber) { /* ... */ }
        }

        class BarsProcessor : ISubscriber<Bar> {
            void Callback(Bar bar) {
                // here we put code that processes Bar objects
            }
        }

        // ********** program *********
        class Program {
            public static void Main(string[] args) {
                var fooSource = fooSourceFactory.Create();
                // this will create BarsProcessor and perform all the necessary subscriptions
                var barsProcessor = barsProcessorFactory.Create(fooSource);
                // this enters a loop of listening for Foo objects from the
                // network and notifying about their arrival.
                fooSource.Run();
            }
        }

    Which one do you think is better? Exposing IObservable and making our components create new event streams from Rx operators, or defining our own publisher/subscriber interfaces and using Rx internally if needed?
    Here are some things to consider about the designs: In the first design, the consumer of our interfaces has the whole power of Rx at his/her fingertips and can apply any Rx operators. One of us claims this is an advantage and the other claims it is a drawback. The second design allows us to use any publisher/subscriber architecture under the hood, whereas the first design ties us to Rx. If we wish to use the power of Rx, the second design requires more work, because we need to translate the custom publisher/subscriber implementation to Rx and back. It requires writing glue code for every class that wishes to do some event processing.

    Read the article

  • Why is 0 false?

    - by Morwenn
    This question may sound dumb, but why does 0 evaluate to false and any other [integer] value to true in most programming languages?

    String comparison
    Since the question seems a little bit too simple, I will explain myself a little bit more: first of all, it may seem evident to any programmer, but why wouldn't there be a programming language - there may actually be, but not any I have used - where 0 evaluates to true and all the other [integer] values to false? That one remark may seem random, but I have a few examples where it may have been a good idea. First of all, let's take the example of strings' three-way comparison; I will take C's strcmp as an example: any programmer trying C as his first language may be tempted to write the following code:

        if (strcmp(str1, str2)) { // Do something... }

    Since strcmp returns 0, which evaluates to false when the strings are equal, what the beginning programmer tried to do fails miserably, and he generally does not understand why at first. Had 0 evaluated to true instead, this function could have been used in its simplest expression - the one above - when comparing for equality, and the proper checks for -1 and 1 would have been done only when needed. We would have considered the return type as bool (in our minds I mean) most of the time.
    Moreover, let's introduce a new type, sign, that just takes the values -1, 0 and 1. That can be pretty handy. Imagine there is a spaceship operator in C++ and we want it for std::string (well, there is already the compare function, but the spaceship operator is more fun). The declaration would currently be the following one:

        sign operator<=>(const std::string& lhs, const std::string& rhs);

    Had 0 evaluated to true, the spaceship operator wouldn't even need to exist, and we could have declared operator== that way:

        sign operator==(const std::string& lhs, const std::string& rhs);

    This operator== would have handled three-way comparison at once, and could still be used to perform the following check, while still being able to check which string is lexicographically superior to the other when needed:

        if (str1 == str2) { // Do something... }

    Old error handling
    We now have exceptions, so this part only applies to the old languages where no such thing exists (C for example). If we look at C's standard library (and the POSIX one too), we can see for sure that maaaaany functions return 0 when successful and any other integer otherwise. I have sadly seen some people do this kind of thing:

        #define TRUE 0
        // ...
        if (some_function() == TRUE) {
            // Here, TRUE would mean success...
            // Do something
        }

    If we think about how we think in programming, we often have the following reasoning pattern:

        Do something
        Did it work?
        Yes -> That's ok, one case to handle
        No  -> Why? Many cases to handle

    If we think about it again, it would have made sense to put the only neutral value, 0, on the yes (and that's how C's functions work), while all the other values can be there to handle the many cases of the no. However, in all the programming languages I know (except maybe some experimental esoteric languages), that yes evaluates to false in an if condition, while all the no cases evaluate to true. There are many situations where "it works" represents one case while "it does not work" represents many probable causes. If we think about it that way, having 0 evaluate to true and the rest to false would have made much more sense.

    Conclusion
    My conclusion is essentially my original question: why did we design languages where 0 is false and the other values are true, taking into account my few examples above and maybe some more I did not think of?
    Follow-up: It's nice to see there are many answers with many ideas and as many possible reasons for it to be like that. I love how passionate you seem to be about it. I originally asked this question out of boredom, but since you seem so passionate, I decided to go a little further and ask about the rationale behind the Boolean choice for 0 and 1 on Math.SE :)

    Read the article

  • Cocoa equivalent of the Carbon method getPtrSize

    - by Michael Minerva
    I need to translate a Carbon method into Cocoa, and I am having trouble finding any documentation about what the Carbon method getPtrSize really does. From the code I am translating, it seems that it returns the byte representation of an image, but that doesn't really match up with the name. Could someone give me a good explanation of this method, or link me to some documentation that describes it? The code I am translating is in a Common Lisp implementation called MCL that has a bridge to Carbon (I am translating into CCL, which is a Common Lisp implementation with a Cocoa bridge). Here is the MCL code (#_ before a method call means that it is a Carbon method):

        (defmethod COPY-CONTENT-INTO ((Source inflatable-icon) (Destination inflatable-icon))
          ;; check for size compatibility to avoid disaster
          (unless (and (= (rows Source) (rows Destination))
                       (= (columns Source) (columns Destination))
                       (= (#_getPtrSize (image Source)) (#_getPtrSize (image Destination))))
            (error "cannot copy content of source into destination inflatable icon: incompatible sizes"))
          ;; given that they are the same size only copy content
          (setf (is-upright Destination) (is-upright Source))
          (setf (height Destination) (height Source))
          (setf (dz Destination) (dz Source))
          (setf (surfaces Destination) (surfaces Source))
          (setf (distance Destination) (distance Source))
          ;; arrays
          (noise-map Source)       ;; accessor makes array if needed
          (noise-map Destination)  ;; accessor makes array if needed
          (dotimes (Row (rows Source))
            (dotimes (Column (columns Source))
              (setf (aref (noise-map Destination) Row Column) (aref (noise-map Source) Row Column))
              (setf (aref (altitudes Destination) Row Column) (aref (altitudes Source) Row Column))))
          (setf (connectors Destination) (mapcar #'copy-instance (connectors Source)))
          (setf (visible-alpha-threshold Destination) (visible-alpha-threshold Source))
          ;; copy Image: slow byte copy
          (dotimes (I (#_getPtrSize (image Source)))
            (%put-byte (image Destination) (%get-byte (image Source) i) i))
          ;; flat texture optimization: do not copy texture-id -> destination should get its own texture id from OpenGL
          (setf (is-flat Destination) (is-flat Source))
          ;; do not compile flat textures: the display list overhead slows things down by about 2x
          (setf (auto-compile Destination) (not (is-flat Source)))
          ;; to make change visible we have to reset the compiled flag
          (setf (is-compiled Destination) nil))

    Read the article

  • Use RegEx in Java to extract parameters in between parentheses

    - by lars_bx
    I'm writing a utility to extract the names of header files from JSPs. I have no problem reading the JSPs line by line and finding the lines I need. I am having a problem extracting the specific text needed using regex. After looking at many similar questions, I'm hitting a brick wall. An example of the String I'll be matching from within is:

        <jsp:include page="<%=Pages.getString(\"MY_HEADER\")%>" flush="true"></jsp:include>

    All I need is MY_HEADER for this example. Any time I have this tag: <%=Pages.getString I need what comes between this: <%=Pages.getString(\" and this: )%>. Here is what I have currently (which is not working, I might add):

        String currentLine;
        while ((currentLine = fileReader.readLine()) != null) {
            Pattern pattern = Pattern.compile("<%=Pages\\.getString\\(\\\\\"([^\\\\]*)");
            Matcher matcher = pattern.matcher(currentLine);
            while (matcher.find()) {
                System.out.println(matcher.group(1).toString());
            }
        }

    I need to be able to use the Java RegEx API and regex to extract those header names. Any help on this issue is greatly appreciated. Thanks!
    EDIT: Resolved this issue, thankfully. The tricky part was, after being given the right regex, it had to be taken into account that the String I was feeding to the regex was always going to have two "\" characters ( (\"MY_HEADER\") ) that needed to be escaped in the pattern. Here is what worked (thanks to the help ;-)):

        Pattern pattern = Pattern.compile("<%=Pages\\.getString\\(\\\\\"([^\\\\\"]*)");
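    For anyone experimenting with the pattern outside a Java project, a rough equivalent (illustrative only, using Python's standard re module; the sample line mirrors the one above) behaves the same way:

        import re

        # The JSP line contains literal backslash-quote sequences around the key.
        line = r'<jsp:include page="<%=Pages.getString(\"MY_HEADER\")%>" flush="true"></jsp:include>'

        # Same idea as the working Java pattern: match up to (\" and capture
        # everything that is neither a backslash nor a quote.
        pattern = re.compile(r'<%=Pages\.getString\(\\"([^\\"]*)')
        print(pattern.findall(line))  # ['MY_HEADER']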

    Read the article

  • Create an ASMX web service from a WSDL file

    - by metanaito
    I have a WSDL file and I am trying to create a web service that conforms to the WSDL. I've created clients using WSDL files that consume an existing service, but I've never created a web service that needed to follow a specific WSDL. I've gone as far as using: wsdl.exe mywsdl.wsdl /l:VB /serverInterface Now I've got a .vb file generated from that WSDL. However I am not sure what I'm supposed to do with this VB file. It looks like it's got a public interface in there but no class that implements the interface. It also has a bunch of partial classes for the types in the WSDL. I was expecting there to be some sort of stub where I put in the code to complete the service calls. I've only created simple web services before and none of them used public interfaces so I'm unfamiliar with what is going on here. At this point I'm not sure how I use the generated .vb file and make it work with an .asmx file and what additional coding is needed to complete the interface.

    Read the article

  • Maven WTP webapp with maven-eclipse-plugin but without m2eclipse

    - by chrsk
    I use the eclipse:eclipse goal to generate an Eclipse project environment. The deployment works fine. The goal creates the var classpath entries for all needed dependencies. With m2eclipse there was the Maven container, which defines an export folder, which was WEB-INF/lib for me. But I don't want to rely on m2eclipse, so I don't want to use it anymore. The classpath entries which are generated by the eclipse:eclipse goal don't have such an export folder. While booting the servlet container with WTP, it publishes all resources and classes except the libraries to the context. What's missing to publish the needed libs, or isn't that possible without the m2eclipse integration?

    IDE Configuration
    Eclipse 3.5 JEE Galileo, Maven 2, m2eclipse

    The maven-eclipse-plugin configuration

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-eclipse-plugin</artifactId>
          <version>2.8</version>
          <configuration>
            <projectNameTemplate>opensaga-[artifactId]</projectNameTemplate>
            <useProjectReferences>false</useProjectReferences>
            <downloadSources>false</downloadSources>
            <downloadJavadocs>false</downloadJavadocs>
            <wtpmanifest>true</wtpmanifest>
            <wtpversion>2.0</wtpversion>
            <wtpapplicationxml>true</wtpapplicationxml>
            <wtpContextName>opensaga-[artifactId]</wtpContextName>
            <additionalProjectFacets>
              <jst.java>5.0</jst.java>
              <jst.web>2.3</jst.web>
            </additionalProjectFacets>
          </configuration>
        </plugin>

    Read the article

  • WPF Control validation

    - by Jon
    Hi all, I'm developing a WPF GUI framework and have had bad experiences with two-way binding and lots of un-needed events being fired (mainly in Flex), so I have gone down the route of having bindings (strings that represent object paths) in my controls. When a view is requested to be displayed, the controller loads the view, gets the needed entities (using the bindings) from the DB, and populates the controls with the correct values. This has a number of advantages, i.e. lazy loading, default undo behaviour, etc. When the data in the view needs to be saved, the view is passed back to the controller again, which basically does the reverse, i.e. re-populates the entities from the view if values have changed. However, I have run into problems when I try to validate the components. Each entity has attributes on its properties that define validation rules, which the controller can access easily and validate the data from the view against. The actual validation of data is fine. The problem comes when I want the GUI control to display validation error information. If I try changing the style, I get errors saying styles cannot be changed once in use. Is there a way in C# to fire off the normal WPF validation mechanism and just provide it with the validation errors the controller has found? Thanks in advance, Jon

    Read the article

  • Is it possible to make a non-nullable column nullable when used in a view? (sql server)

    - by Matt
    Hi, To start off, I have two tables, PersonNames and PersonNameVariations. When a name is searched, it finds the closest name to one of the ones available in PersonNames and records it in the PersonNameVariations table if it's not already in there. I am using a stored proc to search PersonNames for a passed-in PersonNameVariation and return the information on both the PersonName found and the PersonNameVariation that was compared to it. Since I am using the Entity Framework, I needed to return a complex type in the Import Function, but for some reason it says my current framework doesn't support it. My last option was to use an Entity to return in my stored proc instead. The result that I needed back is the information on both the PersonName that was found and the PersonNameVariation that was recorded. Since I cannot return both entities, I created a view PersonSearchVariationInfo and added it into my Entity Framework in order to use it as the entity to return. The problem is that the search will not always return a PersonName match. It needs to be able to return only the PersonNameVariation data in some cases, meaning that all the fields in PersonSearchVariationInfo pertaining to PersonName need to be nullable. How can I take my view and make some of the fields nullable? When I do it directly in the Entity Framework I get a mapping error:

        Error 4 Error 3031: Problem in mapping fragments starting at line 1202:
        Non-nullable column myproject_vw_PersonSearchVariationInfo.DateAdded in table
        myproject_vw_PersonSearchVariationInfo is mapped to a nullable entity property.
        C:\Users\Administrator\Documents\Visual Studio 2010\Projects\MyProject\MyProject.Domain\EntityFramework\MyProjectDBEntities.edmx 1203 15 MyProject.Domain

    Anyone have any ideas? Thanks, Matt

    Read the article

  • Starter question of declarative style SQLAlchemy relation()

    - by jfding
    I am quite new to SQLAlchemy, or even to database programming; maybe my question is too simple. Now I have two classes/tables:

        class User(Base):
            __tablename__ = 'users'
            id = Column(Integer, primary_key=True)
            name = Column(String(40))
            ...

        class Computer(Base):
            __tablename__ = 'comps'
            id = Column(Integer, primary_key=True)
            buyer_id = Column(None, ForeignKey('users.id'))
            user_id = Column(None, ForeignKey('users.id'))
            buyer = relation(User, backref=backref('buys', order_by=id))
            user = relation(User, backref=backref('usings', order_by=id))

    Of course, it cannot run. This is the backtrace:

        File "/Library/Python/2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/state.py", line 71, in initialize_instance
          fn(self, instance, args, kwargs)
        File "/Library/Python/2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/mapper.py", line 1829, in _event_on_init
          instrumenting_mapper.compile()
        File "/Library/Python/2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/mapper.py", line 687, in compile
          mapper._post_configure_properties()
        File "/Library/Python/2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/mapper.py", line 716, in _post_configure_properties
          prop.init()
        File "/Library/Python/2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/interfaces.py", line 408, in init
          self.do_init()
        File "/Library/Python/2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/properties.py", line 716, in do_init
          self._determine_joins()
        File "/Library/Python/2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/properties.py", line 806, in _determine_joins
          "many-to-many relation, 'secondaryjoin' is needed as well." % (self))
        sqlalchemy.exc.ArgumentError: Could not determine join condition between parent/child tables on relation Package.maintainer. Specify a 'primaryjoin' expression. If this is a many-to-many relation, 'secondaryjoin' is needed as well.

    There are two foreign keys in class Computer, so the relation() calls cannot determine which one should be used. I think I must use extra arguments to specify it, right? And how? Thanks
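    The traceback itself points at the fix: give each relation() an explicit primaryjoin so SQLAlchemy knows which foreign key to follow. A minimal sketch of what that could look like (illustrative, untested here):

        class Computer(Base):
            __tablename__ = 'comps'
            id = Column(Integer, primary_key=True)
            buyer_id = Column(None, ForeignKey('users.id'))
            user_id = Column(None, ForeignKey('users.id'))
            # Name each join condition explicitly; every relation now has an
            # unambiguous path from Computer to User.
            buyer = relation(User, primaryjoin=buyer_id == User.id,
                             backref=backref('buys', order_by=id))
            user = relation(User, primaryjoin=user_id == User.id,
                            backref=backref('usings', order_by=id))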

    Read the article

  • operator new for array of class without default constructor......

    - by skydoor
    For a class without a default constructor, operator new and placement new can be used to declare an array of such a class. When I read the code in More Effective C++, I found the code below (I modified some parts). My question is: why is the [] after operator new needed? I tested it without the [] and it still works. Can anybody explain that?

        class A {
        public:
            int i;
            A(int i) : i(i) {}
        };

        int main() {
            void *rawMemory = operator new[](10 * sizeof(A)); // Why is [] needed here?
            A *p = static_cast<A*>(rawMemory);
            for (int i = 0; i < 10; i++) {
                new (&p[i]) A(i);
            }
            for (int i = 0; i < 10; i++) {
                cout << p[i].i << endl;
            }
            for (int i = 0; i < 10; i++) {
                p[i].~A();
            }
            return 0;
        }

    Read the article

  • How to express inter project dependencies in Eclipse PDE

    - by Roland Tepp
    I am looking for the best practice for handling inter-project dependencies between mixed project types, where some of the projects are Eclipse plug-in/OSGi bundle projects (an RCP application) and others are plain old Java projects (web services modules). A few of the Eclipse plug-ins have dependencies on the Java projects. My problem is that, at least as far as I've looked, there is no way of cleanly expressing such a dependency in the Eclipse PDE environment. I can have plug-in projects depend on other plug-in projects (via Import-Package or Require-Bundle manifest headers), but not on the plain Java projects. I seem to be able to have a project declare a dependency on a jar from another project in the workspace, but these jar files are not picked up by either the export or the launch configuration (although Java code editing sees the libraries just fine). The "Java projects" are used for building services to be deployed on a J2EE container (JBoss 4.2.2 for the moment) and produce in some cases multiple jars - one for deploying to the JBoss ear and another for use by client code (an RCP application). The way we've "solved" this problem for now is that we have two more external tools launcher configurations - one for building all the jars and another for copying these jars to the plug-in projects. This works (sort of), but the "whole build" and "copy jars" targets incur quite a large build step, bypassing the whole Eclipse incremental build feature, and by copying the jars instead of just referencing the projects I am decoupling the dependency information and requesting quite a massive workspace refresh that eats up development time like it was candy. What I would like to have is a much more "natural" workspace setup that would manage dependencies between projects and request incremental rebuilds only as they are needed, be able to use client code from service libraries in the RCP application plug-ins, and be able to launch the RCP application with all the necessary classes where they are needed. So can I have my cake and eat it too ;)
    NOTE: To be clear, this is not so much about dependency management and module management at the moment as it is about Eclipse PDE configuration. I am well aware of products like Maven, Ivy and Buckminster, and they solve a quite different problem (once I've solved the workspace configuration issue, these products can actually come in handy for materializing the workspace and building the product).

    Read the article

  • Facebook Graph API authentication in canvas app and track session

    - by cdpnet
    The short question is: how can I use the Graph API OAuth redirect mechanism to authenticate the user and save the retrieved access_token, and also use the JavaScript SDK when needed (the problem is that the JavaScript SDK will have a different access_token when initialized)? I initially set up my Facebook iframe canvas app with single sign-on. This works well with the Graph API, as I am able to use the access_token saved by Facebook's JavaScript when it detects a session change (user logged in). But I'd rather not do single sign-on, and instead use the Graph API redirect and send the user to a permissions dialog. But if he has already given permissions, I shouldn't redirect the user. How do I handle this? Another question: I have done Graph API redirects for authentication and have retrieved the access_token as well. But then, what if I want to use the JavaScript call FB.ui to do stream.Publish? I think it will use its own access_token, which is set during FB.init when detecting the session. So, I am looking for some path here: how to use the Graph API for authentication and also use Facebook's JavaScript SDK when needed. P.S. I'm using ASP.NET MVC 2. I have an authentication filter developed, which needs to detect the user's authentication state and redirect (currently it does this to the Graph API authorize URL).

    Read the article

  • How to permanently remove xcuserdata under the project.xcworkspace and resolve uncommitted changes

    - by JeffB6688
    I am struggling with a problem with a merge conflict (see Cannot Merge due to conflict with UserInterfaceState.xcuserstate). Based on feedback, I needed to remove the UserInterfaceState.xcuserstate using git rm. After considerable experimentation, I was able to remove the file with "git rm -rf project.xcworkspace/xcuserdata". So while I was on the branch I was working on, it almost immediately came back as a file that needed to be committed. So I did the git rm on the file again and just switched back to the master. Then I performed a git rm on the file again. The operation again removed the file. But I am still stuck. If I try to merge the branch into the master branch, it again says that I have uncommitted changes. So I go to commit the change. But this time, it shows UserInterfaceState.xcuserstate as the file to commit, but the box is unchecked and it can't be checked. So I can't move forward. Is there a way to use 'git rm' to permanently remove xcuserdata under the project.xcworkspace? Help!! Any ideas?

    Read the article

  • Unit tests for deep cloning

    - by Will Dean
    Let's say I have a complex .NET class, with lots of arrays and other class object members. I need to be able to generate a deep clone of this object - so I write a Clone() method and implement it with a simple BinaryFormatter serialize/deserialize - or perhaps I do the deep clone using some other technique which is more error-prone and which I'd like to make sure is tested. OK, so now (OK, I should have done it first) I'd like to write tests which cover the cloning. All the members of the class are private, and my architecture is so good (!) that I haven't needed to write hundreds of public properties or other accessors. The class isn't IComparable or IEquatable, because that's not needed by the application. My unit tests are in a separate assembly to the production code. What approaches do people take to testing that the cloned object is a good copy? Do you write (or rewrite once you discover the need for the clone) all your unit tests for the class so that they can be invoked with either a 'virgin' object or with a clone of it? How would you test whether part of the cloning wasn't deep enough - as this is just the kind of problem which can give hideous-to-find bugs later?
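    One language-agnostic technique that catches a clone that isn't deep enough: clone, then mutate the original's nested state, then assert the clone is unchanged. A minimal sketch (illustrative Python rather than .NET; Widget and its clone() are hypothetical stand-ins for the real class and its Clone()):

        import copy
        import unittest

        class Widget:
            def __init__(self):
                self.values = [1, 2, 3]   # nested, mutable state

            def clone(self):
                # Stand-in for the real Clone(); deepcopy plays the role of
                # the BinaryFormatter round-trip.
                return copy.deepcopy(self)

        class CloneTests(unittest.TestCase):
            def test_clone_is_deep(self):
                original = Widget()
                cloned = original.clone()
                # Mutate the original AFTER cloning; a copy that is not deep
                # enough would see this change through the shared reference.
                original.values.append(4)
                self.assertEqual([1, 2, 3], cloned.values)

        if __name__ == '__main__':
            unittest.main()

    Because the mutation happens through the original, this style works even when members are private to outside callers, as long as the test can drive the object through its normal public behaviour.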

    Read the article

  • Proper way to use a config file?

    - by user156814
    I just started using a PHP framework, Kohana (v2.3.4), and I am trying to set up a config file for each of my controllers. I have never used a framework before, so obviously Kohana is new to me. I was wondering how I should set up my controllers to read my config file. For example, I have an article controller and a config file for that controller:

        // config/article.php
        $config = array(
            'display_limit' => 25,         // limit of articles to list
            'comment_display_limit' => 20, // limit of comments to list for each article
            // other things
        );

    I have 3 ways of loading the config settings. Should I:

    A) Load everything into an array of settings

        // set a config array
        class article_controller extends controller {
            public $config = array();
            function __construct(){
                $this->config = Kohana::config('article');
            }
        }

    B) Load and set each setting as its own property

        // set each config as a property
        class article_controller extends controller {
            public $display_limit;
            public $comment_display_limit;
            function __construct(){
                $config = Kohana::config('article');
                foreach ($config as $key => $value){
                    $this->$key = $value;
                }
            }
        }

    C) Load each setting only when needed

        // load config settings only when needed
        class article_controller extends controller {
            function __construct(){}

            // list all articles
            function show_all(){
                $display_limit = Kohana::config('article.display_limit');
            }

            // list article, with all comments
            function show($id = 0){
                $comment_display_limit = Kohana::config('article.comment_display_limit');
            }
        }

    Note: Kohana::config() returns an array of items. Thanks

    Read the article

  • How to make Stack.Pop threadsafe

    - by user260197
    I am using the BlockingQueue code posted in this question, but realized I needed to use a Stack instead of a Queue, given how my program runs. I converted it to use a Stack and renamed the class as needed. For performance I removed locking in Push, since my producer code is single-threaded. My problem is: how can a thread working on the (now) thread-safe Stack know when it is empty? Even if I add another thread-safe wrapper around Count that locks on the underlying collection like Push and Pop do, I still run into the race condition that accessing Count and then Pop is not atomic. Possible solutions, as I see them (which is preferred, and am I missing any that would work better?):
    1. Consumer threads catch the InvalidOperationException thrown by Pop().
    2. Pop() returns nullptr when _stack->Count == 0; however, C++/CLI does not have the default() operator à la C#.
    3. Pop() returns a boolean and uses an output parameter to return the popped element.
    Here is the code I am using right now:

        generic <typename T>
        public ref class ThreadSafeStack {
        public:
            ThreadSafeStack() {
                _stack = gcnew Collections::Generic::Stack<T>();
            }

        public:
            void Push(T element) {
                _stack->Push(element);
            }

            T Pop(void) {
                System::Threading::Monitor::Enter(_stack);
                try {
                    return _stack->Pop();
                }
                finally {
                    System::Threading::Monitor::Exit(_stack);
                }
            }

        public:
            property int Count {
                int get(void) {
                    System::Threading::Monitor::Enter(_stack);
                    try {
                        return _stack->Count;
                    }
                    finally {
                        System::Threading::Monitor::Exit(_stack);
                    }
                }
            }

        private:
            Collections::Generic::Stack<T> ^_stack;
        };
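    Of the options listed, number 3 avoids both exceptions-as-flow-control and the Count/Pop race, because the emptiness check and the pop happen under a single lock acquisition. An illustrative sketch of that shape (Python rather than C++/CLI; all names are hypothetical):

        import threading

        class ThreadSafeStack:
            def __init__(self):
                self._items = []
                self._lock = threading.Lock()

            def push(self, item):
                with self._lock:
                    self._items.append(item)

            def try_pop(self):
                # The emptiness check and the pop share one lock acquisition,
                # so no other consumer can drain the stack between the two.
                with self._lock:
                    if not self._items:
                        return False, None
                    return True, self._items.pop()

        stack = ThreadSafeStack()
        stack.push(42)
        ok, value = stack.try_pop()   # ok == True,  value == 42
        ok, value = stack.try_pop()   # ok == False, value is None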

    Read the article

  • Very strange jQuery / AJAX behavior

    - by Dr. DOT
    I have an Ajax call to the server that only works when I add an alert(); to it. I cannot figure out what is wrong. Can anyone help?
    This Does Not Work (i.e., the Ajax call to the server does not get made):

        <!--
        jQuery.support.cors = true; // needed for ajax to work in certain older browsers and versions
        $('input[name="status"]').on("change", function() {
            if ($('input:radio[name="status"]:checked').val() == 'Y') {
                $.ajax({
                    url: 'http://mydomain.com/dir/myPHPscript.php?param=' + $('#param').val() +
                         '&id=' + ($('#id').val() * 1) + '&mode=' + $('#mode').val()
                });
            }
            window.parent.closePP();
            window.top.location.href = $('#redirect').val(); // reloads page
        });
        //-->

    This Works! (i.e., the Ajax call to the server gets made when I have the alert() present):

        <!--
        jQuery.support.cors = true; // needed for ajax to work in certain older browsers and versions
        $('input[name="status"]').on("change", function() {
            if ($('input:radio[name="status"]:checked').val() == 'Y') {
                $.ajax({
                    url: 'http://mydomain.com/dir/myPHPscript.php?param=' + $('#param').val() +
                         '&id=' + ($('#id').val() * 1) + '&mode=' + $('#mode').val()
                });
                alert('this makes it work');
            }
            window.parent.closePP();
            window.top.location.href = $('#redirect').val(); // reloads page
        });
        //-->

    Thanks.

    Read the article

  • Windows update breaks dlls?

    - by shoosh
    I'm compiling a project which uses multiple DLLs and compiles with VS2008. After a recent Windows update, DLLs compiled on my computer stopped working on other computers. After some investigation it turned out that the update changed the CRT redistributable library which I'm compiling with from version "9.0.21022.8" to version "9.0.30729.4148". This is evident from the manifest file of the EXE I'm compiling. It contains the following:

        <dependency>
          <dependentAssembly>
            <assemblyIdentity type="win32" name="Microsoft.VC90.CRT" version="9.0.21022.8"
                processorArchitecture="amd64" publicKeyToken="1fc8b3b9a1e18e3b"></assemblyIdentity>
          </dependentAssembly>
        </dependency>
        <dependency>
          <dependentAssembly>
            <assemblyIdentity type="win32" name="Microsoft.VC90.CRT" version="9.0.30729.4148"
                processorArchitecture="amd64" publicKeyToken="1fc8b3b9a1e18e3b"></assemblyIdentity>
          </dependentAssembly>
        </dependency>

    Meaning it wants to use two different versions of the CRT at the same time. The second version is needed by the code which I'm compiling right now, and the first version is needed by older DLLs which were compiled a few weeks ago. On the computers where the application is deployed, this becomes a problem, since they get their CRT DLL from a local folder called Microsoft.VC90.CRT and not from WinSxS. This folder can't contain two different versions of the DLL. Is there a known solution to this issue, or do I need to start compiling all of the other DLLs with the new CRT?

    Read the article

  • iPhone Debugging: How to resolve 'failed to get the task for process'?

    - by unforgiven
    I have just added a provisioning profile to Xcode (needed to support notifications and in-app purchase), set up the build configuration for ad hoc distribution as needed, and tried to run the app on the device (I have done this several times in the past without any problem). The app is installed, but it does not start. On the console, I see the following message:

        Error launching remote program: failed to get the task for process 82.
        Error launching remote program: failed to get the task for process 82.
        The program being debugged is not being run.
        The program being debugged is not being run.

    However, if I start the application on the device manually, it works as expected. I have recently installed the latest Xcode 3.2 for Snow Leopard. Is this a known bug in this version of Xcode, or am I doing something wrong?
    EDIT: It works fine with a release build using the development provisioning profile. I have checked the ad hoc provisioning profile again to make sure it includes the device I am using.

    Read the article
