Search Results

Search found 21310 results on 853 pages for 'multiple domains'.


  • Simplest possible table entry editing / addition / deletion web toolkit (for use with php)

    - by Dave
    Hi, I'm building a website in PHP, and I have tables presented for which I need to allow the user to:
    1. Add a new entry (only one at a time, which should appear in a new modal overlay).
    2. Delete multiple selected entries.
    3. Edit an existing entry (only one at a time, in a view similar to 1).
    4. Re-arrange entries up and down, one at a time; multiple/grouped rearrangements are not needed.
    Which jQuery/JS/other toolkit would be the SIMPLEST to work with? (Of course, I should be able to work with it from PHP.) I did try hacking away at http://www.ericmmartin.com/projects/simplemodal/ but had a terrible time trying to get it to work for editing existing data (I had problems passing data to it).

    Read the article

  • Binding Many-to-Many Core Data relationships in UI

    - by Kevin
    Basically my setup is this: I have a many-to-many relationship in Core Data where a student entity can have multiple courses, and a course entity can have multiple students. My problem is figuring out how to bind this relationship to the UI in Interface Builder. I want to be able to add courses to a course array controller, then have those courses displayed in a popup menu in an NSTableView in the Edit Student window, where you can add courses to a student. This is what I have so far: http://vimeo.com/10671726 It's probably easier to understand from the video. Thanks

    Read the article

  • Fast single thread comet server, possible?

    - by Pepijn
    I recently encountered a few cases where a server distributes an event stream that contains the exact same data for all listeners, such as a 'recent activity' box. It occurred to me that it is quite strange and inefficient to have a server like Apache run a thread that processes and queries the database for every single comet stream containing the same data. What I would do for those global (not per-user) streams is run a single thread that continuously emits data, and a new (green) thread for every new request that outputs the headers and then 'merges' into the main thread. Is it possible for one thread to serve multiple sockets, or for multiple clients to listen on the same socket? An example (o = event):
                         # threads    received
        | a b            # 3
        o / /            # 3
        - |/_/ |         # 1
        o     c         # 2          a, b
        | /
        o/               # 2          a, b
        o                # 1          a, b, c
        |                # connection b closed
        o                # 1          a, c
    Does something like this exist? Would it work? Is it possible to do? Disclaimer: I'm not a server expert.
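    A minimal sketch of that idea, assuming Python's asyncio (the names broadcaster and handle_client, the port, and the 2-second tick are made up for illustration): one coroutine produces the shared stream, and every accepted connection merely subscribes to it, so there is no per-client worker and no per-client database query.

        import asyncio

        clients = set()  # writers for all currently connected listeners

        async def broadcaster():
            """Single producer: emit the same payload to every open connection."""
            counter = 0
            while True:
                await asyncio.sleep(2)              # e.g. poll the database once here
                payload = f"data: event {counter}\n\n".encode()
                counter += 1
                for writer in list(clients):
                    try:
                        writer.write(payload)
                        await writer.drain()
                    except ConnectionError:
                        clients.discard(writer)     # listener went away

        async def handle_client(reader, writer):
            """Per-connection handler: send headers once, then just subscribe."""
            writer.write(b"HTTP/1.1 200 OK\r\nContent-Type: text/event-stream\r\n\r\n")
            await writer.drain()
            clients.add(writer)

        async def main():
            server = await asyncio.start_server(handle_client, "127.0.0.1", 8080)
            asyncio.create_task(broadcaster())
            async with server:
                await server.serve_forever()

        if __name__ == "__main__":
            asyncio.run(main())

    So yes, a single thread (here, a single event loop) can serve many sockets; the operating system multiplexes them underneath via select/epoll.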

    Read the article

  • Open Source Queuing Solutions for peek, mark as done and then remove

    - by user330114
    I am looking at open source queuing platforms that allow me to do the following: I have multiple producers and multiple consumers putting data into a queue in a multithreaded environment, and I need consumers to be able to do the following:
    1. Peek at a message from the queue, which should mark the message as invisible on the queue so that other consumers cannot consume the same message.
    2. The consumer works on the message and, if it is able to do the work successfully, marks the message as consumed, which should permanently delete it from the queue.
    3. If the consumer dies abruptly after taking the message, or fails to acknowledge successful consumption within a certain timeout, the message is made visible on the queue again so that another consumer can pick it up.
    I've been looking at RabbitMQ, HornetQ and ActiveMQ, but I'm not sure I can get this functionality out of the box. Any recommendations on a system that gives me this functionality?
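    What is described is essentially a visibility timeout. A rough in-memory sketch of that behaviour, assuming Python (the class and method names are invented for illustration, and this toy version is not thread-safe); JMS brokers such as ActiveMQ get a similar effect by redelivering messages that were received but never acknowledged:

        import time
        import uuid

        class VisibilityQueue:
            """Toy queue with peek/ack semantics and a visibility timeout."""

            def __init__(self, timeout_seconds=30):
                self.timeout = timeout_seconds
                self.messages = {}         # id -> payload
                self.invisible_until = {}  # id -> time until which a consumer holds it

            def put(self, payload):
                msg_id = str(uuid.uuid4())
                self.messages[msg_id] = payload
                return msg_id

            def peek(self):
                """Return (id, payload) and hide the message from other consumers."""
                now = time.time()
                for msg_id, payload in self.messages.items():
                    if self.invisible_until.get(msg_id, 0) <= now:
                        self.invisible_until[msg_id] = now + self.timeout
                        return msg_id, payload
                return None                # nothing currently visible

            def ack(self, msg_id):
                """Consumer finished successfully: delete the message permanently."""
                self.messages.pop(msg_id, None)
                self.invisible_until.pop(msg_id, None)

        # If a consumer crashes after peek() and never calls ack(), its
        # invisible_until entry expires and a later peek() hands the message out again.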

    Read the article

  • Why doesn't Maven's mvn clean ever work the first time?

    - by hoffmandirt
    Nine times out of ten when I run mvn clean on my projects I experience a build error. I have to execute mvn clean multiple times until the build error goes away. Does anyone else experience this? Is there any way to fix this within Maven? If not, how do you get around it? I wrote a bat file that deletes the target folders and that works well, but it's not practical when you are working on multiple projects. I am using Maven 2.2.1.
        [ERROR] BUILD ERROR
        [INFO] ------------------------------------------------------------------------
        [INFO] Failed to delete directory: C:\Documents and Settings\user\My Documents\software-development\a\b\c\application-domain\target.
               Reason: Unable to delete directory C:\Documents and Settings\user\My Documents\software-development\a\b\c\application-domain\target\classes\com\a\b
        [INFO] ------------------------------------------------------------------------
        [INFO] For more information, run Maven with the -e switch
        [INFO] ------------------------------------------------------------------------
        [INFO] Total time: 6 seconds
        [INFO] Finished at: Fri Oct 23 15:22:48 EDT 2009
        [INFO] Final Memory: 11M/254M
        [INFO] ------------------------------------------------------------------------

    Read the article

  • Power Law distribution for a given exponent in C# using MathNet

    - by Eric Tobias
    Hello! I am currently working on a project where I need to generate multiple values (floats or doubles, preferably) that follow a power law distribution with a given exponent. I was advised to use the MathNet.Iridium library to help me. The problem I have is that the documentation, where it exists at all, is not as explicit as it should be. I see multiple distributions that fit the general idea of a power law distribution, but I cannot pinpoint a good one to use with a certain exponent as a parameter. Does anybody have more experience in this matter and could give me some hints or advice?
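    For a pure power law with density p(x) proportional to x^(-a), x >= x_min and exponent a > 1, inverse-transform sampling gives x = x_min * (1 - u)^(-1 / (a - 1)) for u uniform in [0, 1). A minimal sketch of that formula, assuming Python rather than MathNet.Iridium (the function name is invented for illustration):

        import random

        def sample_power_law(exponent, x_min=1.0):
            """Draw one value from p(x) ~ x**(-exponent), with x >= x_min and exponent > 1.

            Uses inverse-transform sampling: x = x_min * (1 - u)**(-1 / (exponent - 1)).
            """
            u = random.random()  # uniform in [0, 1)
            return x_min * (1.0 - u) ** (-1.0 / (exponent - 1.0))

        if __name__ == "__main__":
            print([sample_power_law(2.5) for _ in range(5)])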

    Read the article

  • How to create the following pivot table?

    - by Vamshi
    Hi! Can we create a pivot table with multiple columns, where each column header contains multiple rows? For example, here is the database table:
        BatchID   BatchName   Chemical   Value
        --------------------------------------
        BI-1      BN-1        CH-1       1
        BI-2      BN-2        CH-2       2
    This is the table; I need to display it like below in an Excel sheet:
                  BI-1      BI-2
                  BN-1      BN-2
        ------------------------------
        CH-1      1         null
        ------------------------------
        CH-2      null      2
        ------------------------------
    Here BI-1 and BN-1 are two header rows in a single column, and I need to display the chemical's value in the corresponding row. Could you please help me solve this problem? Thank you.
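    If the layout is being produced programmatically rather than by hand in Excel, a pivot with a two-level column header reproduces it. A sketch assuming Python/pandas (column names taken from the table above; the exact console formatting may differ slightly):

        import pandas as pd

        # The flat database table from the question.
        rows = [
            {"BatchID": "BI-1", "BatchName": "BN-1", "Chemical": "CH-1", "Value": 1},
            {"BatchID": "BI-2", "BatchName": "BN-2", "Chemical": "CH-2", "Value": 2},
        ]
        df = pd.DataFrame(rows)

        # Chemicals become rows; each (BatchID, BatchName) pair becomes a two-row column header.
        pivot = df.pivot_table(index="Chemical",
                               columns=["BatchID", "BatchName"],
                               values="Value")
        print(pivot)
        # BatchID    BI-1  BI-2
        # BatchName  BN-1  BN-2
        # Chemical
        # CH-1        1.0   NaN
        # CH-2        NaN   2.0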

    Read the article

  • I need an algorithm to find the best path

    - by user242635
    I need an algorithm to find the best solution to a path finding problem. The problem can be stated as follows: at the starting point I can proceed along multiple different paths, and at each step there are multiple possible choices of where to proceed next. Two tests are applied at each step: a boundary condition that determines whether a path is acceptable or not, and a condition that determines whether the path has reached the final destination and can be selected as the best one. At each step a number of paths can be eliminated, letting only the "good" paths grow. I hope this sufficiently describes my problem, and also a possible brute-force solution. My question is: is brute force the best (or only) solution to the problem? I would also like some hints about the best coding structure for the algorithm.
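    What is described is essentially a breadth-first search with pruning: grow every partial path, discard the ones that fail the boundary condition, and collect the ones that reach the goal. A generic sketch assuming Python; next_steps, is_acceptable and is_goal stand in for the problem-specific callbacks:

        from collections import deque

        def search(start, next_steps, is_acceptable, is_goal):
            """Explore all paths from `start`, pruning unacceptable ones.

            next_steps(state)   -> iterable of possible next states
            is_acceptable(path) -> False to prune this partial path
            is_goal(path)       -> True when the path is a complete solution
            Returns the list of surviving complete paths.
            """
            solutions = []
            frontier = deque([[start]])           # each entry is a partial path
            while frontier:
                path = frontier.popleft()
                if is_goal(path):
                    solutions.append(path)
                    continue
                for state in next_steps(path[-1]):
                    candidate = path + [state]
                    if is_acceptable(candidate):  # boundary condition: keep or prune
                        frontier.append(candidate)
            return solutions

    Whether this beats plain brute force depends entirely on how aggressively is_acceptable prunes; popping the cheapest partial path instead of the oldest turns the same skeleton into best-first search or A*.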

    Read the article

  • Five Key Strategies in Master Data Management

    - by david.butler(at)oracle.com
    Here is a very interesting Profit Magazine article on MDM: A recent customer survey reveals the deleterious effects of data fragmentation. by Trevor Naidoo, December 2010   Across industries and geographies, IT organizations have grown in complexity, whether due to mergers and acquisitions, or decentralized systems supporting functional or departmental requirements. With systems architected over time to support unique, one-off process needs, they are becoming costly to maintain, and the Internet has only further added to the complexity. Data fragmentation has become a key inhibitor in delivering flexible, user-friendly systems. The Oracle Insight team conducted a survey assessing customers' master data management (MDM) capabilities over the past two years to get a sense of where they are in terms of their capabilities. The responses, by 27 respondents from six different industries, reveal five key areas in which customers need to improve their data management in order to get better financial results. 1. Less than 15 percent of organizations surveyed understand the sources and quality of their master data, and have a roadmap to address missing data domains. Examples of the types of master data domains referred to are customer, supplier, product, financial and site. Many organizations have multiple sources of master data with varying degrees of data quality in each source -- customer data stored in the customer relationship management system is inconsistent with customer data stored in the order management system. Imagine not knowing how many places you stored your customer information, and whether a customer's address was the most up to date in each source. In fact, more than 55 percent of the respondents in the survey manage their data quality on an ad-hoc basis. It is important for organizations to document their inventory of data sources and then profile these data sources to ensure that there is a consistent definition of key data entities throughout the organization. Some questions to ask are: How do we define a customer? What is a product? How do we define a site? The goal is to strive for one common repository for master data that acts as a cross reference for all other sources and ensures consistent, high-quality master data throughout the organization. 2. Only 18 percent of respondents have an enterprise data management strategy to ensure that data is treated as an asset to the organization. Most respondents handle data at the department or functional level and do not have an enterprise view of their master data. The sales department may track all their interactions with customers as they move through the sales cycle, the service department is tracking their interactions with the same customers independently, and the finance department also has a different perspective on the same customer. The salesperson may not be aware that the customer she is trying to sell to is experiencing issues with existing products purchased, or that the customer is behind on previous invoices. The lack of a data strategy makes it difficult for business users to turn data into information via reports. Without the key building blocks in place, it is difficult to create key linkages between customer, product, site, supplier and financial data. These linkages make it possible to understand patterns. A well-defined data management strategy is aligned to the business strategy and helps create the governance needed to ensure that data stewardship is in place and data integrity is intact. 3. 
Almost 60 percent of respondents have no strategy to integrate data across operational applications. Many respondents have several disparate sources of data with no strategy to keep them in sync with each other. Even though there is no clear strategy to integrate the data (see #2 above), the data needs to be synced and cross-referenced to keep the business processes running. About 55 percent of respondents said they perform this integration on an ad hoc basis, and in many cases, it is done manually with the help of Microsoft Excel spreadsheets. For example, a salesperson needs a report on global sales for a specific product, but the product has different product numbers in different countries. Typically, an analyst will pull all the data into Excel, manually create a cross reference for that product, and then aggregate the sales. The exact same procedure has to be followed if the same report is needed the following month. A well-defined consolidation strategy will ensure that a central cross-reference is maintained with updates in any one application being propagated to all the other systems, so that data is synchronized and up to date. This can be done in real time or in batch mode using integration technology. 4. Approximately 50 percent of respondents spend manual efforts cleansing and normalizing data. Information stored in various systems usually follows different standards and formats, making it difficult to match the data. A customer's address can be stored in different ways using a variety of abbreviations -- for example, "av" or "ave" for avenue. Similarly, a product's attributes can be stored in a number of different ways; for example, a size attribute can be stored in inches and can also be entered as "'' ". These types of variations make it difficult to match up data from different sources. Today, most customers rely on manual, heroic efforts to match, cleanse, and de-duplicate data -- clearly not a scalable, sustainable model. To solve this challenge, organizations need the ability to standardize data for customers, products, sites, suppliers and financial accounts; however, less than 10 percent of respondents have technology in place to automatically resolve duplicates. It is no wonder, therefore, that we get communications about products we don't own, at addresses we don't reside, and using channels (like direct mail) we don't like. An all-too-common example of a potential challenge follows: Customers end up receiving duplicate communications, which not only impacts customer satisfaction, but also incurs additional mailing costs. Cleansing, normalizing, and standardizing data will help address most of these issues. 5. Only 10 percent of respondents have the ability to share data that was mastered in a master data hub. Close to 60 percent of respondents have efforts in place that profile, standardize and cleanse data manually, and the output of these efforts are stored in spreadsheets in various parts of the organization. This valuable information is not easily shared with the rest of the organization and, more importantly, this enriched information cannot be sent back to the source systems so that the data is fixed at the source. A key benefit of a master data management strategy is not only to clean the data, but to also share the data back to the source systems as well as other systems that need the information. Aside from the source systems, another key beneficiary of this data is the business intelligence system. 
Having clean master data as input to business intelligence systems provides more accurate and enhanced reporting.  Characteristics of Stellar MDM When deciding on the right master data management technology, organizations should look for solutions that have four main characteristics: enterprise-grade MDM performance complete technology that can be rapidly deployed and addresses multiple business issues end-to-end MDM process management with data quality monitoring and assurance pre-built MDM business relevant applications with data stores and workflows These master data management capabilities will aid in moving closer to a best-practice maturity level, delivering tremendous efficiencies and savings as well as revenue growth opportunities as a result of better understanding your customers.  Trevor Naidoo is a senior director in Industry Strategy and Insight at Oracle. 

    Read the article

  • How can I model the data in a multi-language data editor in WPF with MVVM?

    - by Patrick Szalapski
    Are there any good practices to follow when designing a model/ViewModel to represent data in an app that will view/edit that data in multiple languages? Our top-level class--let's call it Course--contains several collection properties, say Books and TopicsCovered, each of which might have a collection property among its data. For example, the data needs to represent course1.Books.First().Title in different languages, and course1.TopicsCovered.First().Name in different languages. We want an app that can edit any of the data for one given course in any of the available languages--as well as edit non-language-specific data, perhaps the Author of a Book (i.e. course1.Books.First().Author). We are having trouble figuring out how best to set up the model to enable binding in the XAML view.
    Option 1: do we replace (in the single-language model) each String with a collection of LanguageSpecificString instances? So to get the author in the current language:
        course1.Books.First().Author.Where(Function(a) a.Language = CurrentLanguage).SingleOrDefault
    If we do that, we cannot easily bind to any value in one given language, only to the collection of language values, such as in an ItemsControl.
        <TextBox Text={Binding Author.???} /> <!-- no way to bind to the current language author -->
    Option 2: do we replace the top-level Course class with a collection of language-specific Courses? So to get the author in the current language:
        course1.GetLanguage(CurrentLanguage).Books.First.Author
    If we do that, we can only easily work with one language at a time; we might want a view to show one language and let the user edit the other.
        <TextBox Text={Binding Author} /> <!-- good -->
        <TextBlock Text={Binding ??? } /> <!-- no way to bind to the other language author -->
    Also, that has the disadvantage of not representing language-neutral data as such; every property (such as Author) would seem to be in multiple languages--even non-string properties.
    Is there an option in between those two? Is there another way that we aren't thinking of? I realize this is somewhat vague, but it seems to be a fairly common problem to design for. Note: this is not a question about providing a multilingual UI, but rather about actually editing multi-language data in a flexible way.

    Read the article

  • Communication between Page and User Controls

    - by Narmatha Balasundaram
    I have a main page that has multiple user controls on it. All the user controls have static text and some data to be retrieved from the DB. The asynchronous DB call is consolidated at the page level, and one call is made to avoid multiple calls (in the different user controls) fetching the same data. I want the page and user controls to load initially with the static text and later refresh their contents with the data obtained from the server. To sum it all up: page loads -> async calls fired on page load -> data received back from the server (XML/text/whatever) -> user controls load with this data using AJAX. What methods do I have to let the user controls know that I have the data from the server and they need to update with this data?

    Read the article

  • Combine static files or load in parallel

    - by Niall Collins
    I am at present introducing code to my site to combine CSS and JavaScript files. Is there a way, without having to include an external library, to load JavaScript asynchronously or in parallel? I have read on some blogs that combining files can be counterproductive, as the resulting HTTP request can be large and it may be better to load multiple files in parallel. Opinions on this? I am caching my JavaScript/CSS, and I would have thought it was better to combine rather than have multiple HTTP requests.

    Read the article

  • Can I use array_push on a SESSION array in php?

    - by zeckdude
    I have an array that I want on multiple pages, so I made it a SESSION array. I want to add a series of names and then on another page, I want to be able to use a foreach loop to echo out all the names in that array. This is the session:
        $_SESSION['names']
    I want to add a series of names to that array using array_push like this:
        array_push($_SESSION['names'], $name);
    I am getting this error:
        array_push() [function.array-push]: First argument should be an array
    Can I use array_push to put multiple values into that array? Or perhaps there is a better, more efficient way of doing what I am trying to achieve?

    Read the article

  • Track pageviews in Google Analytics on partial url of Grails application

    - by Pieter van Gent
    For my Grails application I want to set up Google Analytics to track only "partial" URLs. I'll explain: a typical Grails URL consists of the following parts: domain + application-name + controller + action + id, e.g. www.mydomain.com/myapp/controller/action/12345. As far as I understand, for Google Analytics the page to be tracked is identified by the entire URL. For my purposes I'm not interested in the id part of the URL: I want to know which actions have been performed, but I need not know for which id the action was executed. And of course I would like a generic solution, because I have multiple controllers and multiple actions... Maybe some kind of filter stating "I want to track pages three levels deep (/myapp/controller/action)" would do? Or a filter stating "exclude everything in the URL after the last /"? Any help would be much appreciated. Kind regards, Pieter

    Read the article

  • Retrieve the CAKeyframeAnimationKey's key in AnimationDidStop

    - by JacktheSparrow
    I have multiple CAKeyframeAnimation objects, each added with a unique key, like the following:
        .....
        [myAnimation setValues:images];
        [myAnimation setDuration:1];
        ....
        [myLayer addAnimation:myAnimation forKey:@"unique key"];
    My question is: if I have multiple animations like this, each with a unique key, how do I retrieve their keys in the animationDidStop method? I want to be able to do something like this:
        - (void)animationDidStop:(CAAnimation *)animation finished:(BOOL)flag {
            if (..... == @"uniquekey1") {
                // code to handle this specific animation here
            } else if (.... == @"uniquekey2") {
                // code to handle this specific animation here
            }
        }

    Read the article

  • How to deal with many to many relationships with NSFetchedResultsController?

    - by Phil Yates
    OK, so I have two entities in my data model (let's say entityA and entityB), and they have a to-many relationship to each other. I have set up an NSFetchedResultsController to fetch a bunch of entityA. Now I'm trying to have the section names for the table view be the title of entityB: sectionNameKeyPath:@"entityB.title". This causes a problem, whereby the section name returned from that relationship appears as ({title1}) or ({title1,title2...titleN}), obviously depending on how many different entityBs are involved. This doesn't look great in a table view and doesn't group the objects as I would like. What I would like is a section per entityB title, with entityA objects appearing under each section--under multiple sections if necessary. I'm at a loss as to how I am supposed to achieve this: whether I need to update the predicate to get the entity to appear multiple times, or whether I need to update the section and header functions to do some processing as the controller loops through the objects. Any help is appreciated :) Thanks

    Read the article

  • Socket: can an asynchronous Receive return without reading all the bytes I asked for?

    - by NorthWind
    Hi, I was reading an article on Vadym Stetsiak's blog about how to transfer variable-length messages with async sockets (URL: http://vadmyst.blogspot.com/2008/03/part-2-how-to-transfer-fixed-sized-data.html). He says: "What to expect when multiple messages arrive at the server? While dealing with multiple messages one has to remember that receive operation can return arbitrary number of bytes being read from the net. Typically that size is from 0 to specified buffer length in the Receive or BeginReceive methods." So, even if I tell BeginReceive to read 100 bytes, it may read less than that and return??? I am developing network-enabled software (TCP/IP), and I always receive the exact number of bytes I asked for. I don't even understand the logic: why would Receive complete asynchronously if it didn't get every byte I asked for... just keep waiting. Maybe it has something to do with IP vs TCP? Thank you for your help.
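    Yes, a receive can legitimately return fewer bytes than requested: TCP delivers a byte stream, not messages, so a read completes as soon as some data is available, anywhere from 1 byte up to the buffer size (0 means the peer closed the connection). Getting exactly the amount you asked for is common on a local network but is not guaranteed. The usual fix is to loop until the expected length has arrived. A sketch of that loop, assuming Python for brevity (the same idea applies to Receive/BeginReceive in C#):

        import socket

        def recv_exactly(sock: socket.socket, n: int) -> bytes:
            """Read exactly n bytes from a connected TCP socket, looping as needed."""
            chunks = []
            remaining = n
            while remaining > 0:
                chunk = sock.recv(remaining)   # may return fewer bytes than asked for
                if not chunk:                  # b'' means the peer closed the connection
                    raise ConnectionError("socket closed before %d bytes arrived" % n)
                chunks.append(chunk)
                remaining -= len(chunk)
            return b"".join(chunks)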

    Read the article

  • How can you toggle between two sets of values per data series in flot?

    - by Jedidja
    flot has built-in support for multiple data series (sample code) and also dual axes (sample code). Assuming multiple data series (water, electricity, etc.) that each have an amount (usage) and a dollar value (charge for that usage), what would be the best way to use flot to display either the amounts or the dollar values for all the data series, while still supporting toggling the display of each individual series? The idea is to send down all the data in one GET request and then let the client take care of everything else in JavaScript. Ideally we could use triplets somehow {date, amount, charge}, and then possibly split that into two arrays for flot.

    Read the article

  • Using installshield to replace a same-versioned DLL in the GAC

    - by Kevin
    We recently put out an update of one of our apps with a "test" DLL from a third party. The third party does not update the assembly versions on their DLLs, only the file versions, so multiple apps can reference different "versions" of it. However, the GAC still allows us to keep the newest version, because it also checks the file version, which is always updated. What happened is that we were not ready to release this DLL, but it got out there on some customer machines. I would like to put our current live version back out there, but it has an older file version (and the same assembly version) than the test DLL. We have multiple apps referencing this DLL, so I can't simply delete it and drop in the new one. Is there a way to replace the DLL in the GAC? I'm using InstallShield 2009. Perhaps some sort of custom action upon install?

    Read the article

  • Are frameworks really necessary for this problem?

    - by The Elite Gentleman
    Hi guys, I'm creating an online music store application (in Java) for signed and unsigned artists for my client. I'm currently using Struts 1.3.10 for my web application (I was recommended Spring, but Spring's setup is fairly similar to Struts). My database is currently MySQL 5 (or higher) and I'm using a DAO pattern to talk to it. There are limitations to using Struts and DAOs (e.g. multiple file upload in Struts is not implemented the same way as multiple String parameters, and DAOs lack a publish-subscribe feature). Is what I'm doing the best way forward, or should I go straight to Hibernate (or similar) and move away from Struts? What are the performance implications or technical issues that you've experienced with the same setup I have? The client doesn't care how I do it, as long as it is done.

    Read the article

  • ODEE Green Field (Windows) Part 4 - Documaker

    - by AndyL-Oracle
    Welcome back! We're about nearing completion of our installation of Oracle Documaker Enterprise Edition ("ODEE") in a green field. In my previous post, I covered the installation of SOA Suite for WebLogic. Before that, I covered the installation of WebLogic, and Oracle 11g database - all of which constitute the prerequisites for installing ODEE. Naturally, if your environment already has a WebLogic server and Oracle database, then you can skip all those components and go straight for the heart of the installation of ODEE. The ODEE installation is comprised of two procedures, the first covers the installation, which is running the installer and answering some questions. This will lay down the files necessary to install into the tiers (e.g. database schemas, WebLogic domains, etcetera). The second procedure is to deploy the configuration files into the various components (e.g. deploy the database schemas, WebLogic domains, SOA composites, etcetera). I will segment my posts accordingly! Let's get started, shall we? Unpack the installation files into a temporary directory location. This should extract a zip file. Extract that zip file into the temporary directory location. Navigate to and execute the installer in Disk1/setup.exe. You may have to allow the program to run if User Account Control is enabled. Once the dialog below is displayed, click Next. Select your ODEE Home - inside this directory is where all the files will be deployed. For ease of support, I recommend using the default, however you can put this wherever you want. Click Next. Select the database type, database connection type – note that the database name should match the value used for the connection type (e.g. if using SID, then the name should be IDMAKER; if using ServiceName, the name should be “idmaker.us.oracle.com”). Verify whether or not you want to enable advanced compression. Note: if you are not licensed for Oracle 11g Advanced Compression option do not use this option! Terrible, terrible calamities will befall you if you do! Click Next. Enter the Documaker Admin user name (default "dmkr_admin" is recommended for support purposes) and set the password. Update the System name and ID (must be unique) if you want/need to - since this is a green field install you should be able to use the default System ID. The only time you'd change this is if you were, for some reason, installing a new ODEE system into an existing schema that already had a system. Click Next. Enter the Assembly Line user name (default "dmkr_asline" is recommended) and set the password. Update the Assembly Line name and ID (must be unique) if you want/need to - it's quite possible that at some point you will create another assembly line, in which case you have several methods of doing so. One is to re-run the installer, and in this case you would pick a different assembly line ID and name. Click Next. Note: you can set the DB folder if needed (typically you don’t – see ODEE Installation Guide for specifics. Select the appropriate Application Server type - in this case, our green field install is going to use WebLogic - set the username to weblogic (this is required) and specify your chosen password. This credential will be used to access the application server console/control panel. Keep in mind that there are specific criteria on password choices that are required by WebLogic, but are not enforced by the installer (e.g. must contain a number, must be of a certain length, etcetera). Choose a strong password. 
Set the connection information for the JMS server. Note that for the 12.3.x version, the installer creates a separate JVM (WebLogic managed server) that hosts the JMS server, whereas prior editions place the JMS server on the AdminServer.  You may also specify a separate URL to the JMS server in case you intend to move the JMS resources to a separate/different server (e.g. back to AdminServer). You'll need to provide a login principal and credentials - for simplicity I usually make this the same as the WebLogic domain user, however this is not a secure practice! Make your JMS principal different from the WebLogic principal and choose a strong password, then click Next. Specify the Hot Folder(s) (comma-delimited if more than one) - this is the directory/directories that is/are monitored by ODEE for jobs to process. Click Next. If you will be setting up an SMTP server for ODEE to send emails, you may configure the connection details here. The details required are simple: hostname, port, user/password, and the sender's address (e.g. emails will appear to be sent by the address shown here so if the recipient clicks "reply", this is where it will go). Click Next. If you will be using Oracle WebCenter:Content (formerly known as Oracle UCM) you can enable this option and set the endpoints/credentials here. If you aren't sure, select False - you can always go back and enable this later. I'm almost 76% certain there will be a post sometime in the future that details how to configure ODEE + WCC:C! Click Next. If you will be using Oracle UMS for sending MMS/text messages, you can enable and set the endpoints/credentials here. As with UCM, if you're not sure, don't enable it - you can always set it later. Click Next. On this screen you can change the endpoints for the Documaker Web Service (DWS), and the endpoints for approval processing in Documaker Interactive. The deployment process for ODEE will create 3 managed WebLogic servers for hosting various Documaker components (JMS, Interactive, DWS, Dashboard, Documaker Administrator, etcetera) and it will set the ports used for each of these services. In this screen you can change these values if you know how you want to deploy these managed servers - but for now we'll just accept the defaults. Click Next. Verify the installation details and click Install. You can save the installation into a response file if you need to (which might be useful if you want to rerun this installation in an unattended fashion). Allow the installation to progress... Click Next. You can save the response file if needed (e.g. in case you forgot to save it earlier!) Click Finish. That's it, you're done with the initial installation. Have a look around the ODEE_HOME that you just installed (remember we selected c:\oracle\odee_1?) and look at the files that are laid down. Don't change anything just yet! Stay tuned for the next segment where we complete and verify the installation. 

    Read the article

  • Generate all project dependencies in a single file using gcc -MM flag

    - by Jabez
    Hi all, I want to generate a single dependency file that contains all the dependencies of the source files, using gcc -M flags through a Makefile. I googled for a solution, but all the solutions mentioned generate multiple dependency files for multiple objects.
        DEPS = make.dep

        $(OBJS): $(SOURCES)
        	@$(CC) -MM $(SOURCEs) > $(DEPS)
        	@mv -f $(DEPS) $(DEPS).tmp
        	@sed -e 's|.$@:|$@:|' < $(DEPS).tmp > $(DEPS)
        	@sed -e 's/.*://' -e 's/\\$$//' < $(DEPS).tmp | fmt -1 | \
        	    sed -e 's/^ *//' -e 's/$$/:/' >> $(DEPS)
        	@rm -f $(DEPS).tmp
    But it is not working properly. Please tell me where I'm making the mistake.

    Read the article

  • How do I check the validity of the Canadian Social Insurance Number in C#?

    - by user518307
    I've been given the assignment to write an algorithm in C# that checks the validity of a Canadian Social Insurance Number (SIN). Here are the steps to validate a SIN, using the example number 123 456 782:
    1. Remove the check digit (the last digit): 12345678
    2. Extract the even-positioned digits (the 2nd, 4th, 6th and 8th digits): 2 4 6 8
    3. Double them: 4 8 12 16
    4. Add those digits together: 4+8+1+2+1+6 = 22
    5. Add the odd-positioned digits: 1+3+5+7 = 16
    6. Total: 38
    Validity algorithm: if the total is a multiple of 10, the check digit should be zero. Otherwise, subtract the total from the next highest multiple of 10 (40 in this case); the check digit for the SIN must equal the difference (in this case, 40 - 38 = 2; the check digit is 2, so the number is valid). I'm lost on how to actually implement this in C#. How do I do this?
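    The steps above are the standard Luhn checksum. A compact sketch of the described algorithm, assuming Python rather than C# (the structure translates almost line for line):

        def is_valid_sin(sin: str) -> bool:
            """Validate a 9-digit Canadian SIN using the steps described above."""
            digits = [int(c) for c in sin if c.isdigit()]
            if len(digits) != 9:
                return False

            check_digit = digits[-1]
            body = digits[:-1]                             # the first eight digits

            total = 0
            for position, digit in enumerate(body, start=1):
                if position % 2 == 0:                      # even-positioned digits are doubled
                    doubled = digit * 2
                    total += doubled // 10 + doubled % 10  # add the digits of the doubled value
                else:                                      # odd-positioned digits are added as-is
                    total += digit

            expected = (10 - total % 10) % 10              # 0 when total is already a multiple of 10
            return check_digit == expected

        if __name__ == "__main__":
            print(is_valid_sin("123456782"))               # True, per the worked example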

    Read the article
