Search Results

Search found 38522 results on 1541 pages for 'single source'.

Page 391/1541

  • recommendation for configuration for a multi-core guestOS

    - by reidLinden
    Hi there, I've just received an upgraded host machine, and am looking to push some of those advances to my workstation's guest OS(es). In particular, I used to have a single processor with 2 cores, so my guest OS only had 1 processor/1 core. Now I've got a single processor with 8 cores, so I'm curious about what would be recommended for my guest OS now? 1 processor/4 cores? 2 processors/2 cores? 4 processors/1 core? My instinct says to stick with the number of physical processors (or fewer), but is that based on reality? I spent a good while looking for an answer to this, but perhaps my google-karma isn't in my favor today. Suggestions?

    Read the article

  • What is the R Language?

    - by TATWORTH
    I encountered the R language recently in O'Reilly books, and while from the context I knew it was a language for dealing with statistics, a web search for the support web site was futile. However, I have now located the web site: it is at http://www.r-project.org/. R is a free language available for a number of platforms, including Windows. CRAN mirrors are available at a number of locations worldwide. Here is the official description: "R is a language and environment for statistical computing and graphics. It is a GNU project which is similar to the S language and environment which was developed at Bell Laboratories (formerly AT&T, now Lucent Technologies) by John Chambers and colleagues. R can be considered as a different implementation of S. There are some important differences, but much code written for S runs unaltered under R. R provides a wide variety of statistical (linear and nonlinear modelling, classical statistical tests, time-series analysis, classification, clustering, ...) and graphical techniques, and is highly extensible. The S language is often the vehicle of choice for research in statistical methodology, and R provides an Open Source route to participation in that activity. One of R's strengths is the ease with which well-designed publication-quality plots can be produced, including mathematical symbols and formulae where needed. Great care has been taken over the defaults for the minor design choices in graphics, but the user retains full control. R is available as Free Software under the terms of the Free Software Foundation's GNU General Public License in source code form. It compiles and runs on a wide variety of UNIX platforms and similar systems (including FreeBSD and Linux), Windows and MacOS."

    Read the article

  • Event handler generation in Visual Studio 2012

    - by Jalpesh P. Vadgama
    This post is part of the Visual Studio 2012 feature series. There are lots of new features in Visual Studio 2012, and event handler generation is one of them. In earlier versions of Visual Studio there was no way to create an event handler from the source view directly. Now Visual Studio 2012 has event handler generation functionality. So if you are editing an event attribute in source view, IntelliSense will display an "add new event handler" template, and once you click on it, it will create a new event handler in the .cs file. It will also put the event handler name against the event name, so you don't need to write that yourself. So, let's take a simple example of a button click event: once I write the OnClick attribute, the smart IntelliSense will pop up. Now once you click on <Create New Event>, it will create the event handler in the .cs file, and it will also put submitButton_Click in the OnClick attribute. Hope you liked it. Stay tuned for more. Till then, happy programming.

    Read the article

  • implementing dynamic query handler on historical data

    - by user2390183
    EDIT: Refined question to focus on the core issue. Context: I have historical data about property (house) sales collected from various sources in a centralized/cloud data source (assume info collection is handled by a third party), and I am planning to develop an application to query and retrieve data from this centralized data source. Example queries: Simple: for a given XYZ postcode, what is the average price for a 3-bedroom house? Complex: what is the estimated price for a house at "DD, Some Street, XYZ Postcode" (worked out from average values of historic data filtered by various characteristics of the house: postcode, number of bedrooms, total area, and deeper insights like building type, year built, features)? In addition to average price, the application should support other property info: maximum or minimum price, etc., and a trend (graph) on a selected property attribute over a period of time. Hence, the queries should not enforce a search based on a primary key or a few fixed fields. In other words, queries can be: what is the change in 3-bedroom house prices (irrespective of location) over the last 30 days? What kind of properties can we get for X price (irrespective of location or house type)? The challenge I have is identifying the domain (BI/data analytics, DB design, DB query interface, DW, or something else) this problem (dynamic queries on historic data) belongs to, so that I can explore further. My findings so far (I could be wrong on the following, so please correct me if you think so): I briefly read about BI/data analytics - I think it is a heavyweight solution for my problem and has scalability issues. DB design - as I understand it, an RDBMS works well if you know the data model at design time; I expect the attributes about a property or other entities (users) to evolve quickly, so maintenance would be an issue, and with multiple users executing queries at the same time, performance would be a bottleneck. Other options like graph DBs (http://www.tinkerpop.com/) seem a bit complex (they are good, but using tools meant for generic purposes makes me feel like I'm solving my problem in assembly). Big Data solutions are aimed at analysing data from multiple unrelated domains. So, any suggestions on the space this problem fits in? (Especially if you have design/implementation experience of a back-end for property listing or similar portals.)
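    For illustration, the "simple" query above takes only a few lines once the centralized data is pulled into a tabular structure. A minimal sketch in Python/pandas, assuming the data can be exported as CSV; the column names (postcode, bedrooms, price, sold_date) are hypothetical:

        import pandas as pd

        # Hypothetical export of the centralized property-sales data
        sales = pd.read_csv("property_sales.csv", parse_dates=["sold_date"])

        # Simple: average price of 3-bedroom houses in a given postcode
        avg = sales[(sales.postcode == "XYZ") & (sales.bedrooms == 3)].price.mean()

        # Trend: mean 3-bedroom price per day over the last 30 days, any location
        recent = sales[sales.sold_date >= sales.sold_date.max() - pd.Timedelta(days=30)]
        three_bed = recent[recent.bedrooms == 3]
        trend = three_bed.groupby(three_bed.sold_date.dt.date).price.mean()

        print(avg)
        print(trend)

    The point of the sketch is that ad-hoc filters over evolving attributes fit analytical tooling more naturally than a fixed-key lookup scheme.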

    Read the article

  • Is SQL Server stored procedure encryption safe?

    - by George2
    I am using SQL Server 2008 Enterprise on Windows Server 2003 Enterprise. I developed some stored procedures for SQL Server, and the machine installed with SQL Server may not be fully under my control (it may be used by an untrusted 3rd party). I want to protect my stored procedures' T-SQL source code (i.e. make it not viewable by another party) by using the stored procedure encryption function provided by SQL Server. I am not sure whether stored procedure encryption is 100% safe, and whether the administrator of the machine (installed with SQL Server) still has ways to view the stored procedures' source code? Thanks in advance, George
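    For reference, the feature in question is the WITH ENCRYPTION option on CREATE PROCEDURE. It hides the module text from sys.sql_modules and sp_helptext, but it is widely documented as obfuscation rather than true encryption: a sysadmin can still recover the text (e.g. via the dedicated admin connection or third-party tools). A minimal sketch of creating and checking such a procedure from Python with pyodbc; the connection string, procedure, and table names are hypothetical:

        import pyodbc

        # Hypothetical connection details -- adjust driver/server/credentials
        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=myserver;DATABASE=mydb;UID=me;PWD=secret"
        )
        cur = conn.cursor()

        # WITH ENCRYPTION obfuscates the stored module text
        cur.execute("""
            CREATE PROCEDURE dbo.usp_GetOrders
            WITH ENCRYPTION
            AS
            BEGIN
                SELECT OrderId, Total FROM dbo.Orders;
            END
        """)
        conn.commit()

        # The definition now comes back as NULL for the encrypted module
        cur.execute("""
            SELECT definition FROM sys.sql_modules
            WHERE object_id = OBJECT_ID('dbo.usp_GetOrders')
        """)
        print(cur.fetchone())   # (None,)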

    Read the article

  • How did we get saddled with the (hierarchical) filesystem as the basic data structure?

    - by user1936
    I'm self-taught and I don't have a CS degree. The more I've been learning about data structures, the more I wonder, in this day and age, how we are still saddled with the filesystem, with directories and files, as the basic data storage structure on the OS. I understand the simplicity of it, but it seems nowadays that there could be more options available natively. As far as I'm aware, the only project to improve the basic functionality of the filesystem was ReiserFS, where you could tell what line of a file was changed by whom, and when. For instance, if I could have native tagging for files, where I could tag images, diagrams, word-processing documents, an entire code repository, all as belonging to a single project, that would really be helpful to me. Since I'm stuck in the filesystem paradigm, I know that I could put all of those into a single folder/directory, but what if they already exist in disparate directories and need to stay there? I know there are programs out there that can do this, but why aren't they on the filesystem? Something that would be nice to have is some kind of relational feature in the filesystem, like you get with RDBMSes. I understand that this was supposed to be part of Vista/7, but that fell off the feature list too. Sure, any program can store a binary file and have any data structure it wants in it, but why couldn't the OS offer more complex ways of storing data, beyond the simple hierarchy of the filesystem?

    Read the article

  • Multiple WAPs: Bandwidth, Frequency Considerations

    - by Pete Cresswell
    The router in my LAN closet does 2.4 and 5 GHz. In the kitchen, I have a single-band 2.4 GHz WAP, and in the garden shed I have another single-band 2.4 GHz WAP. All are set to Bandwidth = 40 MHz, Wireless Network Mode = N-Only. The kitchen WAP and the LAN closet router both come up with multiple bars on my smartphone from almost anywhere in the house. The garden shed WAP will register one bar... but only sometimes. The questions: Are these things in danger of butting heads? Should I reset them to Bandwidth = 20 MHz? Bandwidth = Auto? Are there any tools that I could use on an Android smartphone, iPod, or WiFi-enabled laptop to make my own analysis?

    Read the article

  • My Windows keyboard is being "clever" with the quote keys - how can I stop it?

    - by Marcin
    I'm using Windows 7 on a laptop. On the laptop keyboard, for some reason, the quote key (which has both double and single quotes on it) is doing some "clever", annoying things: When I press single-quote (or double-quote), Windows doesn't send any characters until I press it twice (resulting in '' or ""). When I press it before a vowel, I get some kind of accented character. As I usually only write English, this is annoying. The backtick/tilde key is subject to similar behaviour. I have not attempted to set up my computer to process anything other than English. My keyboard appears to be (in so far as these things are standard on laptops) a standard US QWERTY keyboard. How can I stop this happening?

    Read the article

  • Common filesystem for servers behind a rackspace load balancer

    - by thanos panousis
    Our PHP application consists of a single web server that receives files from clients and performs a CPU-intensive analysis on them. Right now, analysis of a single user upload takes about 3 seconds to conclude at 100% CPU, which caps our capacity at roughly 1/3 of a request per second. My team's requirement is to increase capacity without a lot of code reengineering. A possible solution would be to set up a load balancer in front of multiple servers running the same app, connecting to a common DB. The problem is that the analysis outputs files on disk. A load balancer would increase capacity, but then the files won't be shared between servers, so subsequent client requests may fail. We are hosted on Rackspace; is there a way to configure some sort of "common" storage for all servers without having to rewrite our file persistence code? The current code relies on simple fopen calls etc. What are our options?

    Read the article

  • Life Cycle Navigator?

    - by C.W.Holeman II
    In many environments, the file system directory structure and naming conventions attempt to allow one to use a file manager to navigate the life cycle of a document. This overloading of functions makes it difficult for users to handle the complexity. A file browser is a tool that lets the user navigate among files located in a directory structure to find a specific file, whereas, given a specific file, a life cycle navigator is a tool that lets the user navigate its life cycle from source to published copy and across versions. Does a Life Cycle Navigator exist? I see a user pointing at an object: the left mouse button displays the document; the right mouse button has a Life Cycle Navigator (LCN). The LCN displays a tree for a specific document within a file manager, for example:
      Published
        3.2 Current
        3.1
        3.0
        +2.x
        +1.x
        +Archived
        +All
      Source
        Draft
        3.2 Current
        3.1
        3.0
        +2.x
        +1.x
        +Archived
        +All
      +Work Flow
      +Properties
    Or from a command line:
      $ lcn x.pdf --open_source_document | my_favorite_editor
      $ lcn x.pdf --show_published_version_info
      $ lcn x.pdf --show_previous_publish_versions_info
    See also, Life Cycle Navigator.

    Read the article

  • Setting up a Google Analytics Campaign

    - by Ashfame
    I will be doing a bunch of things to give one of my projects (the main app) a big initial push, for which I will be building a few small Facebook apps that will help in promoting the main app. Traffic from these apps needs to be tracked individually. My main app will be posting on users' walls when they need to be notified; traffic from these posts needs to be tracked. Traffic from emails sent by the main app needs to be tracked too, by type of email. I need to track all of these (and possibly a couple more), but I need to be sure I build my campaign URLs correctly, as I won't get another chance to fix them. Correct me where I am wrong. For emails: Campaign Name: Launch; Campaign Medium: Email; Campaign Source: Type1 or Type2 (I can break it down for different types of email, right?). For apps: Campaign Name: Launch; Campaign Medium: Apps; Campaign Source: App1 or App2 (I can break it down here for different apps, right?). What if I want to track two different links within a single email or a single app? Is there any way of tracking them individually while still tracking them as one, since tracking them as one makes more sense for me? Campaign Term and Campaign Content seem irrelevant in my case - or can/should I use them for something? I will also be tracking traffic for different apps. Should I do more? Let me know if my scenario wasn't clear enough and I need to explain more.
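    For what it's worth, the standard Google Analytics campaign parameters map Campaign Name/Medium/Source/Content to utm_campaign, utm_medium, utm_source, and utm_content, and utm_content is the documented way to distinguish two links that share the same campaign/medium/source (e.g. two links in one email). A minimal URL-building sketch in Python; the base URL and values are hypothetical:

        from urllib.parse import urlencode

        def campaign_url(base, source, medium, campaign, content=None):
            # Standard Google Analytics campaign parameters
            params = {
                "utm_source": source,      # e.g. "type1" or "app1"
                "utm_medium": medium,      # e.g. "email" or "apps"
                "utm_campaign": campaign,  # e.g. "launch"
            }
            if content:
                # Distinguishes links that share source/medium/campaign
                params["utm_content"] = content
            return base + "?" + urlencode(params)

        # Two links in the same Type1 launch email, tracked individually
        # but still grouped under one source/medium/campaign:
        print(campaign_url("http://example.com/", "type1", "email", "launch", "header-link"))
        print(campaign_url("http://example.com/", "type1", "email", "launch", "footer-link"))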

    Read the article

  • Debian/Ubuntu apt or pbuilder without root privileges?

    - by Tem Pora
    I want to use apt or pbuilder to build a package in a user's home directory. The home directory has enough space to hold the package's source, its dependencies, and the binary output. But the apt and pbuilder documents say that you have to be root (sudo) to use them. It's frustrating, as the only way now at my disposal is to build the package from source, or use the dumba$$ (sorry for bad language) dpkg, and in both cases figure out every dependency manually, create the directory layout manually, and install the built things manually. Now, if I can do all these things manually, why do the tool writers (apt) think that doing the same through their tool is somehow more special/dangerous? I don't want to use root privileges JUST to build and test a user-land package. If I am NOT allowed to do anything outside my home dir, then why can't the apt or pbuilder type commands "build" something in my home dir without root privileges? I just want to use their functionality. It seems there is nothing like Gentoo Prefix for Debian.

    Read the article

  • Dead-simple USB-based Windows partition cloner?

    - by OverTheRainbow
    Clonezilla is a fine open-source tool, but it requires going through several screens. Since I need to save/restore the same Windows partition, I was wondering if someone knew of a tool (open-source or not) that is easier to use and boots off a USB key drive. Ideally, it would save the two commands to save/restore a partition, so I just need to boot the host from the USB key, choose the command, and it'll take care of business. Are there solutions that look like this? Thank you. Edit: Here's one among other articles that shows how to tell CZ to run a script to avoid the multiple screens.

    Read the article

  • LDAP encrypt attribute that extends userpassword

    - by Foezjie
    In my current LDAP schema I have an objectclass (let's call it group) that has 2 attributes that extend userPassword, like this: attributeType ( groupAttributes:12 NAME 'groupPassword1' SUP userPassword SINGLE-VALUE ) attributeType ( groupAttributes:13 NAME 'groupPassword2' SUP userPassword SINGLE-VALUE ) group extends organisation, so it already has a userPassword attribute. If I use that to enter a new group using phpLDAPadmin, it uses SSHA (by default) and encrypts/hashes the password I entered. But the passwords I entered for groupPassword1 and groupPassword2 don't get encrypted. Is there a way to make it so that those attributes are encrypted too?
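    One common workaround, since client tools typically only hash the userPassword attribute itself, is to hash the value on the client side before writing it to the custom attribute. A minimal sketch of the OpenLDAP-style {SSHA} scheme (base64 of SHA-1(password + salt) followed by the salt), assuming the directory only needs to store and later verify the value:

        import base64
        import hashlib
        import os

        def ssha(password: str) -> str:
            """Return an OpenLDAP-style {SSHA} value."""
            salt = os.urandom(4)
            digest = hashlib.sha1(password.encode("utf-8") + salt).digest()
            return "{SSHA}" + base64.b64encode(digest + salt).decode("ascii")

        def check_ssha(password: str, stored: str) -> bool:
            """Verify a password against a stored {SSHA} value."""
            raw = base64.b64decode(stored[len("{SSHA}"):])
            digest, salt = raw[:20], raw[20:]   # SHA-1 digests are 20 bytes
            return hashlib.sha1(password.encode("utf-8") + salt).digest() == digest

        h = ssha("s3cret")
        print(h, check_ssha("s3cret", h))   # {SSHA}... True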

    Read the article

  • Centrally managing 100+ websites without bankrupting a small company

    - by palintropos
    I'm mainly interested in opinions on the trade-offs between having a single central server that all the websites connect to, as opposed to each website mirroring a subset of the master database with all the products in it. For example, will I run into severe performance issues (or even security issues, or restrictions) making queries to an offsite database? Will we hit scalability issues we can't handle early on, from the sheer bandwidth required to maintain this? If we do go with something like a script that keeps smaller databases (each containing a subset of the central master data) in sync, what sorts of issues will we likely encounter there? I would really like the opinions of people far more knowledgeable than I am regarding the pros and cons of both setups and what headaches we are likely to encounter. CLARIFICATION: This should not be viewed as a question about whether we should implement one database vs. multiple databases; that question has been answered numerous times. The question is about the pros and cons, for a deployment like this, of managing all the websites centrally (one server) vs. trying to keep them all in sync if they each have their own DB (multiple servers). REAL-WORLD EXAMPLE: We are a t-shirt company, and we have individual websites for our different kinds of t-shirts, but we're looking at central order management integrated with our single shopping cart (which is ColdFusion + MySQL). Now, let's say we have a t-shirt that's on 10 of our websites and we change an image for it. Ideally we would change that in one place and the change would propagate, but how would we set this up?

    Read the article

  • Using a CDN for CMS software (multiple sites)

    - by SmokeyPHP
    I'm currently researching ideas for the media management side of a CMS I'm writing. I was looking at having images served from a CDN, which is fine on a single site, but I want all sites that run the CMS to make use of a CDN (which will most likely be a custom-developed one, rather than a third-party service like S3). My main question is: is a multi-site CDN a good idea? I can't think of a downside, but I have probably missed something - obviously the sites won't share the same folder, as I envisage the requests to be css.cdnsite.com/example.com/style.css or something along those lines. Having multiple sites in the same place will obviously make it easier for us to manage, as well as being cheaper, but then I wonder if it'll be worth it... Long story short: how should the CMS handle user-uploaded media across separate installations? Keep a local copy of all assets and serve them from the same site, like in days of yore? Keep a local copy, force each site to use www., and have CDN subdomains per site? Or use a single separate CDN for all sites? Apologies for the length of this question; not sure if this should be multiple questions or not, as all parts are kind of related and could affect each other.

    Read the article

  • LINQ to Twitter Maintenance Feedback

    - by Joe Mayo
    Originally posted on: http://geekswithblogs.net/WinAZ/archive/2013/06/16/linq-to-twitter-maintenance-feedback.aspx It's always fun to receive positive feedback on your work. If you receive a sufficient amount of positive feedback, you know you're doing something right. Sometimes, people provide negative feedback too. There are a couple of ways to handle it: come back fighting or engage for clarification. The way you handle the negative feedback depends on what your goals are. Feedback Approaches If you know the feedback is incorrect and you need to promote your idea or product, you might want to come back fighting. The feedback might just be comments by a troll or competitor wanting to spread FUD. However, this could be the totally wrong approach if you misjudge the source and intentions of the feedback. In a lot of cases, feedback is a golden opportunity. Sometimes, a problem exists that you either don't know about or don't realize the true impact of. If you decide to come back fighting, you might lose the opportunity to learn something new. However, if you engage the person providing the feedback, looking for clarification, you might learn something very important. Negative feedback and its clarification can lead to the collection of useful and actionable data. In my case, something that prompted this blog post, I noticed someone who tweeted a negative comment about LINQ to Twitter. Normally, any less-than-stellar comments are from folks that need help - so I help if I can. This was different; it was like "Don't use LINQ to Twitter". This is an open source project, the comment didn't come from a competing project, and it sounded more like an expression of frustration. So I engaged. Not only did the person respond, but I got some decent-quality feedback. What's also interesting is that a couple of other side conversations sprouted on the subject, which gave me more useful data. [Image: LINQ to Twitter thread] Actions Essentially, this particular issue centered around maintenance. There are actually several sub-issues at play here: dependencies, error handling, debugging, and visibility. I'll describe each one and my interpretation. Dependencies Dependencies are where a library has references to other libraries. This means that when you build your application, you need DLLs for the entire dependency graph of your application. There are several potential problems with this, including more libraries for configuration management, potential versioning mismatches, and lack of cross-platform support. In the early days of LINQ to Twitter, I allowed developers to contribute and add dependencies, but it became very problematic (for the reasons stated). It was like a ball and chain that kept me from moving forward. So, I refactored and pulled other open source into my project to eliminate external dependencies. This lets me fix the code in my project without relying on someone else to upgrade or fix their DLL. The motivation for this came from early negative feedback that translated into important data that I acted on. Today, LINQ to Twitter has zero dependencies. Note: Rejecting good code from community members who worked hard to make your project better is a painful experience in itself. I have to point out that no contribution was in vain, because they had a positive influence on my subsequent refactoring, which resulted in a better developer experience. Error Handling Error handling has been a problem in the past.
I have this combination of supporting both synchronous and asynchronous (APM) processing that can be complex at times. Within the last 6 months, I did a fair amount of refactoring to detect errors and process them properly. I also refactored TwitterQueryException so it includes important data from Twitter. During this refactoring, I've made breaking changes that I felt would improve the development experience (small things like renaming a callback property to Exception, rather than Error). I think the async error handling is much better than it was a year ago. For all the work I've done, there is more to do. I think that a combination of more error handling support, e.g. improving semantics, and education through documentation and samples will improve the error handling story. Because of what I've done so far, it isn't bad, but I see opportunities for improvement. Debugging Debugging can be painful. Here's why: you have multiple layers of technology to navigate to figure out where the real problem is - Twitter API, security, HTTP, LINQ to Twitter, and the application. You can probably add your own nuances to that list, but the point is that debugging in this environment can be complex. I think that my plans for error handling will contribute to making the debugging process easier. However, there's more I can do in the way of documentation and guidance. Some of the questions to be answered revolve around: when something goes wrong, how does the developer figure out that there is a problem, what the problem is, and what to do about it? One example that has gone a long way toward helping LINQ to Twitter developers is the 401 FAQ. A 401 Unauthorized is the error that the Twitter API returns when a user isn't able to authenticate, and it is one of the most difficult problems faced by LINQ to Twitter developers. What I did was read guidance from Twitter and collect techniques from my own development and from helping other developers, to compile an extensive list of reasons for the 401 and ways to fix the problem. At one time, over half of the questions I answered in the forums were to help solve 401 issues. After publishing the 401 FAQ, I rarely get a 401 question, and when I do, it's because the person didn't know about the FAQ. If the person is too lazy to read the FAQ, that's not my issue, but the results in support issues have been dramatic. I think debugging can benefit from the education and documentation approach, but I'm always open to suggestions on whatever else I can do. Visibility Visibility is a nuance of the error handling/debugging discussion, but is deeply rooted in comfort and control. The questions to ask in this area are: what is happening as my code runs, and how testable is the code? In support of these areas, LINQ to Twitter does have logging and TwitterContext properties that help you see what's happening on requests. The logging functionality allows any developer to connect a TextWriter to the Log property of TwitterContext to see what's happening. Further, TwitterContext has a Headers property to see the headers Twitter returns and a RawResults property to show the JSON string Twitter returns. From a testing perspective, I've been able to write hundreds of unit tests, over 600 when this post is published, and growing. If you write your own library, you have full control over all of these aspects.
The tradeoff here is that while you have access to the LINQ to Twitter source code and can modify it for all the visibility you want, LINQ to Twitter *will* change (which is good) and you will have to figure out how to merge that with your changes (which is hard). The fact is that this is a limitation of any 3rd-party library, not just LINQ to Twitter. So, it's a design decision where the tradeoff is between control and productivity. That said, there are things I can do with LINQ to Twitter to make the visibility story more compelling. I think there are opportunities to improve diagnostics. This would be a ton of work, because it would need to provide multi-level logging that can be tuned for production and support any logging provider you want to attach. I've considered approaches such as how the new Semantic Logging application block connects to Windows Error Reporting as a potential target. Whatever I do would need to be extensible without creating native external dependencies; e.g. consider how many 3rd-party libraries force a dependency on a logging framework that you don't use. So, this won't be an easy feat, but I believe it can be part of the roadmap. I think that a lot of developers are unaware of existing visibility features, so the first step would be to provide more documentation and guidance. My thought is that this would lead to more feedback that will help improve this area. Summary Recent feedback highlights some of the items that are important to LINQ to Twitter developers, such as dependencies, error handling, debugging, and visibility. I know that there are maintenance issues that have been problems for LINQ to Twitter developers in the past. I've done a lot of work in this area, such as improving error handling, adding visibility features, and providing extensive API documentation. That said, there is more to be done to make LINQ to Twitter the best Twitter API experience available for .NET developers, and I welcome anyone's thoughts on what I've written here or on new improvements. @JoeMayo

    Read the article

  • Software to replicate one computers display onto many other displays

    - by Joe Taylor
    We have a classroom setup with one teacher's PC at the front. I am looking for some software, preferably open source (although this is not a deal breaker), to force all displays in the room to replicate the teacher's display. Ideally this software could also be locked so the students could not exit it while it is running. Does anyone know of any software that could perform this task? I have googled around for a solution but haven't found anything suitable as yet. It would be running on Windows 7. Flavours of this software I have found are LanSchool and NetOp. Open source alternatives would be better.

    Read the article

  • Installing GPSBabel on CentOS 5 x86_64

    - by Clint Chaney
    Well, first let me say I have no clue about doing anything on my server; I ask my host to do all installs for me. I run a website where users store latitude and longitude coordinates in my database. I would like them to be able to download these waypoints to their GPS units. I found a program called GPSBabel that allows this to be done: http://www.gpsbabel.org/ I want to be able to control GPSBabel from PHP using exec() or something along those lines. The problem is that the Linux version of the program is distributed as source code, and my host doesn't want to build or install it without some sort of instructions. Does anyone have experience with installing this? Perhaps know someone that has, and can lead me in the right direction? Any help would be hugely appreciated. I'm pretty much stuck without getting this to work.
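    Once GPSBabel is installed, the invocation itself is simple: -i/-f name the input format and file, -o/-F the output format and file, and the same command line can be passed to PHP's exec(). A minimal sketch (in Python, to keep one example language on this page) converting CSV waypoints to GPX; the file names are hypothetical, and the exact input format name should be checked against the GPSBabel documentation:

        import subprocess

        # gpsbabel -i <informat> -f <infile> -o <outformat> -F <outfile>
        result = subprocess.run(
            ["gpsbabel", "-i", "csv", "-f", "waypoints.csv",
             "-o", "gpx", "-F", "waypoints.gpx"],
            capture_output=True, text=True,
        )
        if result.returncode != 0:
            print("gpsbabel failed:", result.stderr)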

    Read the article

  • Multi Threading - How to split the tasks

    - by Motig
    If I have a game engine with the basic 'game engine' components, what is the best way to 'split' the tasks with a multi-threaded approach? Assume I have the standard components of rendering, physics, scripts, and networking, and a quad-core. I see two ways of multi-threading: Option A ('vertical'): using this approach, I can allot one core to each component of the engine; e.g. one core for the rendering task, one for physics, etc. Advantages: I do not need to worry about thread-safety within each component, and I can take advantage of special optimizations provided for single-threaded access (e.g. DirectX offers a flag that can be set to tell it that you will only use single-threading). Option B ('horizontal'): using this approach, each task may be split up into 1 <= n <= numCores threads; the tasks execute one after the other, each parallelized internally (see the sketch below). Advantages: it allows for work-sharing, i.e. each thread can take over work still remaining as the others are still processing, and I can take advantage of libraries that are designed for multi-threading (e.g. DirectX). I think, in retrospect, I would pick Option B, but I wanted to hear you guys' thoughts on the matter.
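    For illustration, a minimal sketch of the 'horizontal' option, assuming the physics step decomposes into independent per-body work; the body layout and integration step are hypothetical, and a process pool is used here because CPython threads don't parallelize CPU-bound work:

        from concurrent.futures import ProcessPoolExecutor

        def integrate(chunk):
            # Hypothetical per-body step: advance position by velocity
            return [(x + vx, y + vy, vx, vy) for (x, y, vx, vy) in chunk]

        def physics_step(bodies, workers=4):
            # 'Horizontal' split: one task, divided across n workers
            size = max(1, len(bodies) // workers)
            chunks = [bodies[i:i + size] for i in range(0, len(bodies), size)]
            # A real engine would keep one pool alive across frames
            # instead of recreating it every step.
            with ProcessPoolExecutor(max_workers=workers) as pool:
                results = pool.map(integrate, chunks)
            return [body for chunk in results for body in chunk]

        if __name__ == "__main__":
            bodies = [(0.0, 0.0, 1.0, 2.0)] * 1000
            print(physics_step(bodies)[0])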

    Read the article

  • List all BPM Processes for a user

    - by kasriniv
    Hello, happy to start contributing to this blog. The title of the blog is probably deceptively simple and warrants an elaboration. Customized BPM workspaces/user interfaces are a fairly common requirement. One of our marquee customers in the online stock trading business envisioned this user interaction for their BPM application: (1) the user logs in to the internal portal; (2) the user has the roles he is granted in a drop-down list; (3) once the user selects a role, a list of processes which the user is part of appears; the logged-in user can be part of any swimlane role of any process. This is a fairly common/reasonable user-UI interaction pattern. 1 and 2 are easily achievable, and hence the subject matter of this blog is the requirement in 3. Objective: given a username and a role, list all the BPM processes that the user is part of, in any swimlane of any process. Here is a quick overview of the major steps/logic in the code: Initialize the workflow/BPM context as usual. Get a handle on InstanceQueryService (getInstanceQueryService), InstanceManagementService, ProcessMetadataService and ProcessModelService. List all processes for that BPM context (listProcessMetadataSumary) and get the roles granted to that user. For each of the processes [method getAccessibleProcesss(ProcessMetadataSummary, Set)], for each of the lanes in the process, check if the role granted to the user matches the roleName for that swimlane. If so, add it to the output. Notes: the usual caveats apply, including that BPM APIs are subject to change. JDeveloper method introspection is a better friend than the API documentation :-)... (I am going to try to upload the source code, and if that doesn't work, I will follow this blog up with the corresponding source code.) Hope this helps. Ack: Yogesh K, BPM Dev team.

    Read the article

  • Why can't I compile this version of Postfix?

    - by Coofucoo
    I just installed Postfix 2.7.11 on Ubuntu Server from source code. (I don't use Ubuntu's own package because I need this old version.) I found a very interesting problem: before, on both CentOS 5 and 6, I could build the source code without any problem, but on Ubuntu Server 12.04 it is totally different. I got the following errors: dict_nis.c:173: error: undefined reference to 'yp_match' dict_nis.c:187: error: undefined reference to 'yp_match' dns_lookup.c:347: error: undefined reference to '__dn_expand' dns_lookup.c:218: error: undefined reference to '__res_search' dns_lookup.c:287: error: undefined reference to '__dn_expand' dns_lookup.c:498: error: undefined reference to '__dn_expand' dns_lookup.c:383: error: undefined reference to '__dn_expand' The reason is obvious: I just searched for the related libraries and added them to the makefile, and it works. The question is why? What is the difference between Ubuntu Server and CentOS? One possibility is the gcc and ld versions: Ubuntu Server uses different versions of gcc and ld than CentOS. But I am not sure.

    Read the article

  • Complex string matching with fuzzywuzzy

    - by That1Guy
    I'm attempting to write a process that matches obscure strings to a single 'master string' for further processing. I have a lot of data that looks something like this:
      Basketball
      Basket Ball
      Football
      BasketBallR
      BBall
      BBall - r
      FootB
    ...and so on. These need to be mapped to a master record like so:
      Basketball = Basket Ball, BBall
      Basketball - R = BasketBallR, BBall - r
    I also have instances of data resembling this format:
      Football -r
      FootBall - r-g/H,Q,HH
    These situations need to be separated into different categories before being mapped. For example, FootBall - r-g/H,Q,HH should become:
      Football - r
      Football - g
      Football - H
      Football - Q
      Football - HH
    At that point, each entry still needs to be mapped to a master record... I've tried several different combinations of fuzzywuzzy matching methods, Levenshtein distance measurements, regex, etc., and can't seem to find a reliable method to logically associate different naming styles of a single item with a master name. I'm throwing my hands up in desperation. Are there any existing Python resources that can help sort out my problem? Are there other options? Can anybody point out an obvious option that I might have overlooked? Basically, any suggestion, solution, resource or alternative method is greatly appreciated.
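    A minimal sketch of the two steps described above - expand the suffix shorthand with a regex, then fuzzy-match each expanded variant against the master list with fuzzywuzzy's process.extractOne; the master list and score threshold are assumptions:

        import re
        from fuzzywuzzy import process

        MASTERS = ["Basketball", "Basketball - R", "Football", "Football - r"]

        def expand(raw):
            # "FootBall - r-g/H,Q,HH" -> ["FootBall - r", "FootBall - g", ...]
            m = re.match(r"^(.*?)\s*-\s*(.+)$", raw)
            if not m:
                return [raw]
            base, suffixes = m.groups()
            return [f"{base} - {s}" for s in re.split(r"[-/,]", suffixes) if s]

        def to_master(raw, threshold=80):
            out = []
            for variant in expand(raw):
                best, score = process.extractOne(variant, MASTERS)
                # Note: single-letter suffixes barely move the score, so a
                # variant with no real master entry can still match its
                # nearest neighbour; suffixes may need exact comparison.
                out.append(best if score >= threshold else None)
            return out

        print(to_master("Basket Ball"))
        print(to_master("FootBall - r-g/H,Q,HH"))  # first variant -> 'Football - r'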

    Read the article

  • PHP + IIS7 + X64 OS (Windows 7 or Server 2008)

    - by Eric
    I'm going to answer my own question here, but I thought this might be important enough to post so that it would be indexed for the next person who runs into my situation. Problem: I cannot seem to get PHP code to execute on an x64 version of IIS7, whether on my desktop (Windows 7) or the application's final destination (Windows Server 2008). Every time I try to look at a test PHP document to confirm the installation, I only see the source code. I've followed the documentation from PHP, from iis.net, and from blogs, howtos, and just about anywhere else Google would send me. I tried the web installer, tried manual installations instead of the MSI, tried version 5.3.5, tried version 5.2.17, but no matter what, the code would never execute. I even tried registering .eric files with the PHP FastCGI module, but got the same result: PHP source code only.

    Read the article

  • Which method of SQL Server 2005 or 2008 Replication is best for ease of field changes?

    - by Rick
    We need 15-minute warm updates from one SQL Server to another. Log shipping looks good and appears easy to set up. We are also looking into transactional replication. The data only needs to be copied one way. We have two main requirements: 1) The destination database needs to be at most a 15-minute-old copy of the source, and it needs to retry and catch up if a network cable is unplugged for a while. 2) We would really like table changes in the source (fields added or modified) to be as easy as possible. Thanks in advance for all suggestions.

    Read the article
