Search Results

Search found 206 results on 9 pages for 'mutual exclusion'.

Page 3 of 9

  • File I/O OS handling

    - by Albinoswordfish
    This isn't a direct coding question but more a question about OS handling mechanisms. I was reading somebody's previous question regarding C# and file handling. Apparently C# was throwing an exception about a file being locked when they tried to access it. So my question is: does C# use an internal lock to handle file I/O between processes, or does the OS use some type of mutual exclusion for file I/O? From what I learned about operating systems, at least on Unix, the OS doesn't enforce any mutual exclusion for processes trying to access the same file.
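
    For context on why the behaviour differs by platform: on most Unix systems file locks (flock/fcntl) are advisory, so they only exclude processes that also ask for a lock, while Windows enforces the share mode a file was opened with, which is what usually surfaces as the C# "file is in use" exception. A minimal Java sketch of taking an advisory lock, with a hypothetical file name and platform-dependent lock semantics:

        import java.io.IOException;
        import java.nio.channels.FileChannel;
        import java.nio.channels.FileLock;
        import java.nio.file.Path;
        import java.nio.file.StandardOpenOption;

        public class AdvisoryLockDemo {
            public static void main(String[] args) throws IOException {
                Path file = Path.of("shared-data.txt"); // hypothetical file name
                try (FileChannel channel = FileChannel.open(file,
                        StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
                    // tryLock() returns null if another program already holds the lock;
                    // whether the lock is advisory or mandatory depends on the platform.
                    FileLock lock = channel.tryLock();
                    if (lock == null) {
                        System.out.println("File is locked by another process");
                    } else {
                        try {
                            System.out.println("Got the lock; safe to write");
                        } finally {
                            lock.release();
                        }
                    }
                }
            }
        }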

    Read the article

  • Response to Software Exception in Patent Bill

    NZOSS: "Law firms that supported continued software patents have published critiques of the arguments put forward by those who opposed software patents and asked for an exclusion to be added to the Patent Bill. In this article Peter Harrison, Vice President of the NZOSS, responds."

    Read the article

  • How do you exclude yourself from Google Analytics on your website using cookies?

    - by Keoki Zee
    I'm trying to set up an exclusion filter with a browser cookie, so that my own visits to my site don't show up in my Google Analytics. I tried 3 different methods and none of them have worked so far. I would like help understanding what I am doing wrong and how I can fix this.

    Method 1: First, I tried following Google's instructions for excluding traffic by cookie content, http://www.google.com/support/analytics/bin/answer.py?hl=en&answer=55481: create a new page on your domain containing the following code:

        <body onLoad="javascript:pageTracker._setVar('test_value');">

    Method 2: Next, when that didn't work, I googled around and found this Google thread, http://www.google.com/support/forum/p/Google%20Analytics/thread?tid=4741f1499823fcd5&hl=en, where the most popular answer says to use slightly different code. SHS Analytics wrote:

        <body onLoad="javascript:_gaq.push(['_setVar','test_value']);">

        Thank you! This has now set a __utmv cookie containing "test_value", whereas the original pageTracker._setVar('test_value') (which Google is still recommending) did not manage to do that for me (in Mac Safari 5 and Firefox 3.6.8).

    So I tried this code, but it didn't work for me.

    Method 3: Finally, I searched StackOverflow and came across this thread, http://stackoverflow.com/questions/3495270/exclude-my-traffic-from-google-analytics-using-cookie-with-subdomain, which suggests that the following code might work:

        <script type="text/javascript">
          var _gaq = _gaq || [];
          _gaq.push(['_setVar', 'exclude_me']);
          _gaq.push(['_setAccount', 'UA-xxxxxxxx-x']);
          _gaq.push(['_trackPageview']);
          // etc...
        </script>

    This script appeared in the head element in the example, instead of in the onload event of the body like in the previous 2 examples. So I tried this too, but still had no luck with trying to exclude myself from Google Analytics.

    To re-iterate the question: I tried all 3 methods above with no success. Am I doing something wrong? How can I exclude myself from my Google Analytics using an exclusion cookie for my browser?

    Read the article

  • How do you exclude yourself from Google Analytics on your website using cookies?

    - by Cold Hawaiian
    I'm trying to set up an exclusion filter with a browser cookie, so that my own visits to my site don't show up in my Google Analytics. I tried 3 different methods and none of them have worked so far. I would like help understanding what I am doing wrong and how I can fix this.

    Method 1: First, I tried following Google's instructions for excluding traffic by cookie content, http://www.google.com/support/analytics/bin/answer.py?hl=en&answer=55481: create a new page on your domain containing the following code:

        <body onLoad="javascript:pageTracker._setVar('test_value');">

    Method 2: Next, when that didn't work, I googled around and found this Google thread, http://www.google.com/support/forum/p/Google%20Analytics/thread?tid=4741f1499823fcd5&hl=en, where the most popular answer says to use slightly different code. SHS Analytics wrote:

        <body onLoad="javascript:_gaq.push(['_setVar','test_value']);">

        Thank you! This has now set a __utmv cookie containing "test_value", whereas the original pageTracker._setVar('test_value') (which Google is still recommending) did not manage to do that for me (in Mac Safari 5 and Firefox 3.6.8).

    So I tried this code, but it didn't work for me.

    Method 3: Finally, I searched StackOverflow and came across this thread, http://stackoverflow.com/questions/3495270/exclude-my-traffic-from-google-analytics-using-cookie-with-subdomain, which suggests that the following code might work:

        <script type="text/javascript">
          var _gaq = _gaq || [];
          _gaq.push(['_setVar', 'exclude_me']);
          _gaq.push(['_setAccount', 'UA-xxxxxxxx-x']);
          _gaq.push(['_trackPageview']);
          // etc...
        </script>

    This script appeared in the head element in the example, instead of in the onload event of the body like in the previous 2 examples. So I tried this too, but still had no luck with trying to exclude myself from Google Analytics.

    To re-iterate the question: I tried all 3 methods above with no success. Am I doing something wrong? How can I exclude myself from my Google Analytics using an exclusion cookie for my browser?

    Update: I've been testing this for several days now, and I've confirmed that the 2nd method of excluding yourself from tracking does indeed work. The problem was that the filter settings weren't properly applied to my profile, which has been corrected. See the accepted answer below.

    Read the article

  • Apple's Journey to the Top

    OS Roundup: Was it exclusion or exclusivity that fueled Apple's journey to being the world's biggest technology company? Or perhaps it's Apple's understanding, and its ability to ensure, that when the producer names the tune, the consumer must dance.

    Read the article

  • Invalid SQL Query

    - by svovaf
    I have the following query which, in my opinion, is a valid one, but I keep getting an error telling me that there is a problem with "WHERE em.p4 = ue.p3" - Unknown column 'ue.p3' in 'where clause'. This is the query:

        SELECT DISTINCT ue.p3
        FROM table1 AS ue
        INNER JOIN table2 AS e ON ue.p3 = e.p3
        WHERE EXISTS(
            SELECT 1 FROM (
                SELECT (COUNT(*) >= 1) AS MinMutual
                FROM table4 AS smm
                WHERE smm.p1 IN (
                    SELECT sem.p3
                    FROM table3 AS sem
                    INNER JOIN table2 AS em ON sem.p3 = em.p3
                    WHERE em.p4 = ue.p3
                        AND sem.type = 'friends'
                        AND em.p2 = 'normal'
                )
                AND smm.p5 IN ( 15000,15151 )
            ) AS Mutual
            WHERE Mutual.MinMutual = TRUE
        )
        LIMIT 11

    If I execute the sub-query which is inside the EXISTS function, everything is O.K. PLEASE HELP!

    Read the article

  • Can I do this in only one query?

    - by Paté
    Merry Christmas everyone. I know my way around SQL but I'm having a hard time figuring this one out. First, here are my tables (examples):

        User: id, name
        friend: from (userid), to (userid)

    If user 1 is friends with user 10, then you have a row (1,10). User 1 cannot be friends with user 10 if user 10 is not friends with user 1, so you have both (1,10) and (10,1). It may look weird, but I need those two rows per relation. Now I'm trying to write a query to select the users that have the most mutual friends with a given user. For example, user 1 is friends with users 10, 9 and 7, and user 8 is friends with 10, 9 and 7 too; I want to suggest to user 1 to invite him (like Facebook). I want to get the first 10 people with the most mutual friends. The output would be like User, NumOfMutualFriends. I don't know if that can be done in a single query? Thanks in advance for any help.
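
    For what it's worth, the counting logic the query has to express can be sketched in plain Java over an in-memory friend map (illustration only, with made-up data; this is not the single-query SQL answer itself):

        import java.util.*;

        public class MutualFriends {
            public static void main(String[] args) {
                // Friendships stored in both directions, as in the question.
                Map<Integer, Set<Integer>> friends = Map.of(
                        1, Set.of(10, 9, 7),
                        8, Set.of(10, 9, 7),
                        10, Set.of(1, 8), 9, Set.of(1, 8), 7, Set.of(1, 8));

                int me = 1;
                Set<Integer> mine = friends.get(me);

                // Candidates = everyone who is not me and not already my friend,
                // ranked by how many of their friends are also my friends.
                friends.keySet().stream()
                        .filter(u -> u != me && !mine.contains(u))
                        .map(u -> Map.entry(u,
                                friends.get(u).stream().filter(mine::contains).count()))
                        .sorted(Map.Entry.<Integer, Long>comparingByValue().reversed())
                        .limit(10)
                        .forEach(e -> System.out.println(e.getKey() + "," + e.getValue()));
            }
        }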

    Read the article

  • Clustered MSDTC

    - by niel
    Hi, I'm setting up a SQL cluster (SQL 2008) on Windows 2008 R2. I enable network access on the local DTC and then create a DTC resource in my cluster. The problem is that when I start up the resource, it does not pull through my settings to enable network access. The log shows this:

        MSDTC started with the following settings: Security Configuration (OFF = 0 and ON = 1): Allow Remote Administrator = 0, Network Clients = 0, Transaction Manager Communication: Allow Inbound Transactions = 0, Allow Outbound Transactions = 0, Transaction Internet Protocol (TIP) = 0, Enable XA Transactions = 0, Enable SNA LU 6.2 Transactions = 1, MSDTC Communications Security = Mutual Authentication Required, Account = NT AUTHORITY\NetworkService, Firewall Exclusion Detected = 0, Transaction Bridge Installed = 0, Filtering Duplicate Events = 1

    whereas when I restart the local DTC service it says this:

        Security Configuration (OFF = 0 and ON = 1): Allow Remote Administrator = 0, Network Clients = 1, Transaction Manager Communication: Allow Inbound Transactions = 1, Allow Outbound Transactions = 1, Transaction Internet Protocol (TIP) = 0, Enable XA Transactions = 1, Enable SNA LU 6.2 Transactions = 1, MSDTC Communications Security = No Authentication Required, Account = NT AUTHORITY\NetworkService, Firewall Exclusion Detected = 0, Transaction Bridge Installed = 0, Filtering Duplicate Events = 1

    Settings on both nodes in the cluster are the same. I have reinstalled and restarted too many times to mention. Any ideas?

    Read the article

  • Migrate existing Maven Project into an OSGI Bundle

    - by user1706291
    I am new to the whole OSGi stuff and my task is to create an OSGi bundle out of an existing Maven project. To get started I decided to pick the smallest part and start with it. Here is the pom.xml: <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <parent> <artifactId>cross</artifactId> <groupId>net.sf.maltcms</groupId> <version>1.2.12-SNAPSHOT</version> </parent> <artifactId>cross-main</artifactId> <packaging>jar</packaging> <name>cross-main</name> <dependencies> <dependency> <groupId>${project.groupId}</groupId> <artifactId>cross-annotations</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>${project.groupId}</groupId> <artifactId>cross-event</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>${project.groupId}</groupId> <artifactId>cross-tools</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>${project.groupId}</groupId> <artifactId>cross-exception</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>commons-codec</groupId> <artifactId>commons-codec</artifactId> <version>1.4</version> </dependency> <dependency> <groupId>${project.groupId}</groupId> <artifactId>cross-main-api</artifactId> <version>${project.version}</version> <exclusions> <exclusion> <artifactId>commons-logging</artifactId> <groupId>commons-logging</groupId> </exclusion> </exclusions> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-aop</artifactId> <version>3.0.6.RELEASE</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-asm</artifactId> <version>3.0.6.RELEASE</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-beans</artifactId> <version>3.0.6.RELEASE</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-context</artifactId> <version>3.0.6.RELEASE</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-core</artifactId> <version>3.0.6.RELEASE</version> <exclusions> <exclusion> <artifactId>commons-logging</artifactId> <groupId>commons-logging</groupId> </exclusion> </exclusions> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-expression</artifactId> <version>3.0.6.RELEASE</version> </dependency> <dependency> <groupId>commons-io</groupId> <artifactId>commons-io</artifactId> <version>2.1</version> </dependency> <dependency> <groupId>net.sf.ehcache</groupId> <artifactId>ehcache-core</artifactId> <version>2.4.6</version> </dependency> <dependency> <groupId>${project.groupId}</groupId> <artifactId>cross-math</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>com.db4o</groupId> <artifactId>db4o-all</artifactId> <version>8.0.249</version> </dependency> <dependency> <groupId>net.sf.mpaxs</groupId> <artifactId>mpaxs-spi</artifactId> <version>1.6.10</version> </dependency> <dependency> <groupId>net.sf.mpaxs</groupId> <artifactId>mpaxs-server</artifactId> <version>1.6.10</version> </dependency> </dependencies>
    I did some research and found the Apache Bundle Plugin for Maven, changed the packaging to <packaging>bundle</packaging> and added <build> <plugins> <plugin> <groupId>org.apache.felix</groupId> <artifactId>maven-bundle-plugin</artifactId> <extensions>true</extensions> <configuration> <instructions> <Bundle-SymbolicName>${pom.artifactId}</Bundle-SymbolicName> </instructions> </configuration> </plugin> </plugins> </build>
    mvn clean install went fine and I got a jar file containing the manifest, but of course the bundle could not be resolved: BundleException: The bundle "cross-main_1.2.12.SNAPSHOT [30]" could not be resolved. Reason: Missing Constraint: Import-Package: com.db4o; version="[8.0.0,9.0.0) To make a long story short: what are the possibilities to migrate a Maven application into an OSGi bundle? And especially, how do I manage the dependencies?

    Read the article

  • Enterprise MDM: Rationalizing Reference Data in a Fast Changing Environment

    - by Mala Narasimharajan
    By Rahul Kamath

    Enterprises must move at a rapid pace to establish and retain global market leadership by continuously focusing on operational efficiency, customer intimacy and relentless execution.

    Reference Data Management

    As multi-national companies with a presence in multiple industry categories, market segments, and geographies, their ability to proactively manage changes and harness them to align their front office with back-office operations and performance management initiatives is critical to make the proverbial elephant dance. Managing reference data, including types and codes, business taxonomies, complex relationships as well as mappings, represents a key component of the broader agenda for enabling flexibility and agility without sacrificing enterprise-level consistency, regulatory compliance and control.

    Financial Transformation

    Periodically, companies find that processes implemented a decade or more ago no longer mirror the way of doing business and seek to proactively transform how they operate their business and underlying processes. Financial transformation often begins with the redesign of one's chart of accounts. The ability to model and redesign one's chart of accounts collaboratively, quickly validate it against historical transaction bases and secure business buy-in across multiple line-of-business stakeholders, while continuing to manage changes within the legacy general ledger systems and downstream analytical applications and piloting the in-flight transformation, can mean the difference between controlled success and project failure.

    Attend the session titled CON8275 - Oracle Hyperion Data Relationship Management: Enabling Enterprise Transformation at Oracle OpenWorld on Monday, October 1, 2012 at 4:45pm in Ballroom A of the InterContinental Hotel to learn how Oracle's Data Relationship Management solution can help you stay ahead of the competition and proactively harness master (and reference) data changes to transform your enterprise. Hear in-depth customer testimonials from GE Healthcare and Old Mutual South Africa to learn how others have harnessed this technology effectively to build enduring competitive advantage through business process innovation and investments in master data governance. Hear GE Healthcare discuss how DRM has enabled financial transformation, ERP consolidation, mergers and acquisitions, and the alignment of reference data across financial and management reporting applications. Also, learn how Old Mutual SA has upgraded to EBS R12 Financials and is transforming the management of its chart of accounts for corporate reporting.

    Separately, an esteemed panel of DRM customers including Cisco Systems, Nationwide Insurance, Ralcorp Holdings and Mentor Graphics will discuss their perspectives on how DRM has helped them address business challenges associated with enterprise MDM, including major change management initiatives such as financial transformations, corporate restructuring, mergers & acquisitions, and the rationalization of financial and analytical master reference data to support alternate business perspectives for the alignment of EPM/BI initiatives. Attend the session titled CON9377 - Customer Showcase: Success with Oracle Hyperion Data Relationship Management at OpenWorld on Thursday, October 4, 2012 at 12:45pm in the Ballroom of the InterContinental Hotel to interact with our esteemed speakers first hand.

    Read the article

  • Linkedin recommendations, useful? [closed]

    - by scottyab
    LinkedIn has been around a few years now, and not everyone is keen on it. Chatting to a few people, they viewed LinkedIn recommendations as mutual back scratching, where poor candidates can recommend each other and provide little value. I take an honest approach when recommending colleagues, trying to note specifics that person has performed well at, and I don't recommend colleagues on LinkedIn whom I wouldn't recommend in real life. In what regard do people hold LinkedIn recommendations? Would they affect your hiring decision?

    Read the article

  • Is there an SSL equivalent to an ssh agent?

    - by Matthew J Morrison
    Here is my situation: There are a number of developers who all need to have access to be able to install ruby gems and python eggs from a remote source. Currently, we have a server inside our firewall that hosts the gems and eggs. We now want the ability to be able to install things hosted on that server outside of our firewall. Since some of the gems and eggs that we host are proprietary I would like to somewhat lock access to that machine down, as unobtrusively as possible to the developers. My first thought was using something like ssh keys. So, I spent some time looking at SSL mutual authentication. I was able to get everything set up and working correctly, testing with curl, but the unfortunate thing was that I had to pass extra arguments to curl so it knows about the certificate, key and certificate authority. I was wondering if there is anything like the ssh agent that I can set up to provide that information automatically so that I can push the certificates and keys to the developer's machines so the developers don't have to log in or provide keys each time they try to install something. Another thing that I want to avoid is having to modify the 'gem' command and the 'pip' command to provide keys when they make the http connection. Any other suggestions that may solve this problem (not related to ssl mutual auth) are also welcome. EDIT: I've been continuing to research this and I came across stunnel. I think this may be what I'm looking for, any feedback regarding stunnel would also be great!
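
    As a rough illustration of the mutual-TLS side (not specific to gem, pip or curl, and with hypothetical paths, password and host): once a client certificate and key are installed on a developer's machine, a client can present them automatically on every request, which is the property an ssh agent gives you for SSH keys. A minimal Java sketch:

        import java.io.FileInputStream;
        import java.net.URL;
        import java.security.KeyStore;
        import javax.net.ssl.HttpsURLConnection;
        import javax.net.ssl.KeyManagerFactory;
        import javax.net.ssl.SSLContext;

        public class MutualTlsClient {
            public static void main(String[] args) throws Exception {
                // Hypothetical path/password: the developer's client certificate + key,
                // installed once on their machine.
                char[] password = "changeit".toCharArray();
                KeyStore clientKeys = KeyStore.getInstance("PKCS12");
                try (FileInputStream in = new FileInputStream("/home/dev/.gemclient.p12")) {
                    clientKeys.load(in, password);
                }

                KeyManagerFactory kmf =
                        KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
                kmf.init(clientKeys, password);

                // Trust settings are left as the JVM default; a private CA would also
                // need a TrustManagerFactory initialised with that CA.
                SSLContext ctx = SSLContext.getInstance("TLS");
                ctx.init(kmf.getKeyManagers(), null, null);

                HttpsURLConnection conn = (HttpsURLConnection)
                        new URL("https://gems.example.internal/specs").openConnection();
                conn.setSSLSocketFactory(ctx.getSocketFactory());
                System.out.println("HTTP " + conn.getResponseCode());
            }
        }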

    Read the article

  • Multi-module Maven build: different results from parent and from module

    - by Albaku
    I am migrating an application from an Ant build to a Maven 3 build. This app is composed of:
    - a parent project specifying all the modules to build
    - a project generating classes with JAXB and building a jar with them
    - a project building an ejb project
    - 3 projects building war modules
    - 1 project building an ear
    Here is an extract from my parent pom: <groupId>com.test</groupId> <artifactId>P</artifactId> <packaging>pom</packaging> <version>04.01.00</version> <modules> <module>../PValidationJaxb</module> <-- jar <module>../PValidation</module> <-- ejb <module>../PImport</module> <-- war <module>../PTerminal</module> <-- war <module>../PWebService</module> <-- war <module>../PEAR</module> <-- ear </modules>
    I have several problems which I think have the same origin, probably a dependency management issue that I cannot figure out. The generated modules are different depending on whether I build from the parent pom or from a single module. Typically, if I build PImport only, the generated war is similar to what I had with my Ant build, but if I build from the parent pom, my war takes 20MB; a lot of dependencies from other modules have been added. Both wars run well. My project PWebService has unit tests to be executed during the build. It uses mock-ejb, which has cglib as a dependency. Having a ClassNotFound problem with this one, I had to exclude it and add a dependency on cglib-nodep (see the last pom extract). If I then build only this module, it works well. But if I build from the parent project, it fails because other dependencies in other modules also had an implicit dependency on cglib. I had to exclude it in every module's pom and add the dependency on cglib-nodep everywhere to make it run. Am I missing something important in my configuration?
    The PValidation pom extract: it creates a jar containing an ejb with interfaces generated by XDoclet, as well as a client jar. <parent> <groupId>com.test</groupId> <artifactId>P</artifactId> <version>04.01.00</version> </parent> <artifactId>P-validation</artifactId> <packaging>ejb</packaging> <dependencies> <dependency> <groupId>com.test</groupId> <artifactId>P-jaxb</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate</artifactId> <version>3.2.5.ga</version> <exclusions> <exclusion> <groupId>cglib</groupId> <artifactId>cglib</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>cglib</groupId> <artifactId>cglib-nodep</artifactId> <version>2.2.2</version> </dependency> ... [other libs] ... </dependencies> <build> ... <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-ejb-plugin</artifactId> <configuration> <ejbVersion>2.0</ejbVersion> <generateClient>true</generateClient> </configuration> </plugin> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>xdoclet-maven-plugin</artifactId> ...
    The PImport pom extract: it depends on both the JAXB-generated jar and the ejb client jar. <parent> <groupId>com.test</groupId> <artifactId>P</artifactId> <version>04.01.00</version> </parent> <artifactId>P-import</artifactId> <packaging>war</packaging> <dependencies> <dependency> <groupId>com.test</groupId> <artifactId>P-jaxb</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>com.test</groupId> <artifactId>P-validation</artifactId> <version>${project.version}</version> <type>ejb-client</type> </dependency> ... [other libs] ...
</dependencies> The PWebService pom extract : <parent> <groupId>com.test</groupId> <artifactId>P</artifactId> <version>04.01.00</version> </parent> <artifactId>P-webservice</artifactId> <packaging>war</packaging> <properties> <jersey.version>1.14</jersey.version> </properties> <dependencies> <dependency> <groupId>com.sun.jersey</groupId> <artifactId>jersey-servlet</artifactId> <version>${jersey.version}</version> </dependency> <dependency> <groupId>com.rte.etso</groupId> <artifactId>etso-validation</artifactId> <version>${project.version}</version> <type>ejb-client</type> </dependency> ... [other libs] ... <dependency> <groupId>org.mockejb</groupId> <artifactId>mockejb</artifactId> <version>0.6-beta2</version> <scope>test</scope> <exclusions> <exclusion> <groupId>cglib</groupId> <artifactId>cglib-full</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>cglib</groupId> <artifactId>cglib-nodep</artifactId> <version>2.2.2</version> <scope>test</scope> </dependency> </dependencies> Many thanks

    Read the article

  • Get mutually and non-mutually existing fields in the same table in two columns

    - by ranabra
    This is a question similar to another question I posted here, but a little different. I am trying to get a list of all instances of mutual and non-mutual existing users. What I mean is that the query should return a list of users along with their co-workers. It is similar to the question here, but the difference is that non-mutual users should be returned too, and without the "duplicity" that mutually existing users produce in the list (see the image below, which hopefully simplifies it all). I took the original answer from Thomas (thanks again Thomas):

        Select D1.u_username, U1.Permission, U1.Grade, D1.f_username, U2.Permission, U2.Grade
        from tblDynamicUserList As D1
            Join tblDynamicUserList As D2
                On D2.u_username = D1.f_username
                    And D2.f_username = D1.u_username
            Join tblUsers As U1
                On U1.u_username = D1.u_username
            Join tblUsers As U2
                On U2.u_username = D2.u_username

    and after several trials I commented out 2 lines (below). The returned results are exactly as described at the beginning of this question, but with the "duplicity" returned by mutually existing users in the table. How can I eliminate this duplicity?

        Select D1.u_username, U1.Permission, U1.Grade, D1.f_username, U2.Permission, U2.Grade
        from tblDynamicUserList As D1
            Join tblDynamicUserList As D2
                On D2.u_username = D1.f_username
                    /* And D2.f_username = D1.u_username */
            Join tblUsers As U1
                On U1.u_username = D1.u_username
            Join tblUsers As U2
                On U2.u_username = D2.u_username
        /* WHERE D1.U_userName < D1.f_username */

    Screenshot that hopefully helps explain it all (not included here). The database is SQL Server 2005. Many thanks in advance.

    Read the article

  • Why is Flash power-hungry?

    - by Doug
    Apple historically argued that Flash was power-hungry, which made it inappropriate for use on mobile devices. I always thought that was just bluster excusing Apple's exclusion of Flash support from their mobile devices. But now I see that Adobe acknowledges that Flash is a pig. Why is it a pig? Are there bad programming approaches (that can be explained in layman's terms) that make it so power-hungry?

    Read the article

  • Can I import an outline into PowerPoint?

    - by justintime
    I would like to create a bullet-point presentation in a text file (or Word document) and then import it into PowerPoint 2007:

        Slide 1 title
        - bullet 1
        - bullet 2
          - subpoint
        - bullet 3
        Slide 2 title
        - bullet 1
        - bullet 2

    I find that PowerPoint makes one think too much about form, to the exclusion of content. Answers of the form "Don't use PowerPoint" might not go down too well with our corporate presentation style police!

    Read the article

  • Exclude all hidden directories in UNIX find

    - by xRickerlx
    I'm doing a word search using the following command:

        find . -exec grep -q [some_word] '{}' \; -print -o -name .svn -prune -o -name .ssh -prune -o -name .boneyard -prune -o -name log -prune -o -name tmp -prune

    Is it possible to use a regex to exclude all hidden directories? Note: the current command traverses the entire tree from the current location and excludes the directories being pruned. The exclusion needs to work for any hidden directory, regardless of location.

    Read the article

  • Got a table of people whom I want to link to each other, many-to-many, with the links being bidirectional

    - by dflock
    Imagine you live in very simplified example land - and imagine that you've got a table of people in your MySQL database:

        create table person (
          person_id int,
          name text
        )

        select * from person;
        +-----------+-------+
        | person_id | name  |
        +-----------+-------+
        |         1 | Alice |
        |         2 | Bob   |
        |         3 | Carol |
        +-----------+-------+

    and these people need to collaborate/work together, so you've got a link table which links one person record to another:

        create table person__person (
          person__person_id int,
          person_id int,
          other_person_id int
        )

    This setup means that links between people are uni-directional - i.e. Alice can link to Bob without Bob linking to Alice and, even worse, Alice can link to Bob and Bob can link to Alice at the same time, in two separate link records. As these links represent working relationships, in the real world they're all two-way mutual relationships. The following are all possible in this setup:

        select * from person__person;
        +-------------------+-----------+-----------------+
        | person__person_id | person_id | other_person_id |
        +-------------------+-----------+-----------------+
        |                 1 |         1 |               2 |
        |                 2 |         2 |               1 |
        |                 3 |         2 |               2 |
        |                 4 |         3 |               1 |
        +-------------------+-----------+-----------------+

    For example, with person__person_id = 4 above, when you view Carol's (person_id = 3) profile you should see a relationship with Alice (person_id = 1), and when you view Alice's profile you should see a relationship with Carol, even though the link goes the other way. I realize that I can do union and distinct queries and whatnot to present the relationships as mutual in the UI, but is there a better way? I've got a feeling that there is a better way, one where this issue would neatly melt away by setting up the database properly, but I can't see it. Anyone got a better idea?
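
    One common approach, sketched here in Java purely to illustrate the idea rather than as the SQL answer: store each mutual relationship exactly once in a canonical order (lowest person_id first), so (1,10) and (10,1) collapse to the same row, and have queries look at both columns.

        import java.util.HashSet;
        import java.util.Set;

        public class MutualLinks {
            // Each mutual relationship is stored once, in canonical (low, high) order.
            record Friendship(int personA, int personB) {
                static Friendship of(int x, int y) {
                    return x <= y ? new Friendship(x, y) : new Friendship(y, x);
                }
            }

            public static void main(String[] args) {
                Set<Friendship> links = new HashSet<>();
                links.add(Friendship.of(1, 10));
                links.add(Friendship.of(10, 1)); // collapses onto the same entry
                System.out.println(links.size()); // prints 1
            }
        }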

    Read the article

  • Fixing up Visual Studio's gitignore, using IFix

    - by terje
    Originally posted on: http://geekswithblogs.net/terje/archive/2014/06/13/fixing-up-visual-studiorsquos-gitignore--using-ifix.aspx

    Download tool

    Is there anything wrong with the built-in Visual Studio gitignore? Yes, there is!

    First, some background: when you set up a git repo, it should be small and not contain anything not really needed. One thing you should not have in your git repo is binary files. These binary files may come from two sources. One is the output files, in the bin and obj folders. If you have a gitignore file present, which you should always have (!!), these folders are excluded by the standard included file (the one included when you choose Team Explorer/Settings/GitIgnore - Add). The other source is the packages folder coming from your NuGet setup. You do use NuGet, right? Of course you do! But that gitignore file doesn't have any exclude clause for those folders. You have to add that manually. (It will very probably be included in some upcoming update or release.) This is one thing that is missing from the built-in gitignore. Adding those few lines is a no-brainer, you just include this:

        # NuGet Packages
        packages/*
        *.nupkg
        # Enable "build/" folder in the NuGet Packages folder since
        # NuGet packages use it for MSBuild targets.
        # This line needs to be after the ignore of the build folder
        # (and the packages folder if the line above has been uncommented)
        !packages/build/

    Now, if you are like me, and you probably are, you add git repos faster than you can code, and you end up with a bunch of repos, and then start to wonder: did I fix up those gitignore files, or did I forget?

    The next thing you learn, for example by reading this blog post, is that the "standard" latest Visual Studio gitignore file exists at https://github.com/github/gitignore, and you locate it under the file name VisualStudio.gitignore. Here you will find all the new stuff; for example, the exclusion of the Roslyn ide folders was committed on May 24th. So, you think, all is well, Visual Studio will use this file... I am very sorry, it won't. Visual Studio comes with a gitignore file that is baked into the release, and that is by this time "very old". The one at github is the latest. The included gitignore misses the exclusion of the NuGet packages folder, and it also misses a lot of new stuff, like the Roslyn entries.

    So, how do you fix this, while we wait for the next version? You can manually update it for every single repo you create, which works, but it does get boring after a few times, doesn't it?

    IFix

    Enter IFix, install it from here. IFix is a command line utility (and the installer adds it to the system path, you might need to reboot), and one of the commands is gitignore. If you run it from a directory, it will check and optionally fix all gitignores in all git repos in that folder or below. So, start by running it from your C:/<user>/source/repos folder.

    To run it in check mode - which will not change anything, just do a check:

        IFix gitignore --check

    What it will do is check if the gitignore file is present, and if it is, check if the packages folder has been excluded. If you want to see those that are ok, add the --verbose option too. The result may look like this:

    Fixing missing packages

    Let us fix a single repo by adding the missing packages structure, using IFix --fix. We first check, then fix, then check again to verify that the gitignore is correct, and that the "packages/" part has been added.

    If we open up the .gitignore, we see that the block shown below has been added to the end of the .gitignore file.

    Comparing and fixing with latest standard Visual Studio gitignore (from github)

    Now, this tells you if you miss the NuGet packages folder, but what about the latest gitignore from github? You can check for this too, just add the option --merge (why it is named so will become clear further down):

        IFix gitignore --check --merge

    The result may come out like this (sorry, no colors, not got that far yet here): as you can see, one repo has the latest gitignore (test1), the others are missing either 57 or 150 lines. IFix has three ways to fix this: --add, --merge and --replace. The options work as follows:

    Add: Used to add the standard gitignore in the cases where a .gitignore file is missing, and only that; it won't touch other existing gitignores.
    Merge: Used to merge the missing lines from the standard into the gitignore file. If the gitignore file is missing, the whole standard will be added.
    Replace: Used to force a complete replacement of the existing gitignore with the standard one.

    The Add and Replace options can be used without Fix, which means they will actually do the action. If you combine with --check it will otherwise not touch any files, just do a verification. So a Merge Check will tell you if there is any difference between the local gitignore and the standard gitignore - a Compare, in effect. When you do a Fix Merge it will combine the local gitignore with the standard, and add what is missing to the end of the local gitignore. It may mean some things get doubled up if they are spelled a bit differently. You might also see some extra comments added, but they do no harm.

    Init new repo with standard gitignore

    One cool thing is that with a new repo, or a repo that is missing its gitignore, you can grab the latest standard just by using either the Add or the Replace command; both will in effect do the same in this case. So

        IFix gitignore --add

    will add it in, as in the complete example below, where we set up a new git repo and add in the latest standard gitignore.

    Notes

    The project is open sourced at github, and you can also report issues there.

    Read the article

  • Guest Post: Instantiate SharePoint Workflow On Item Deleted

    - by Brian Jackett
    In this post, guest author Lucas Eduardo Silva will walk you through the steps of instantiating a workflow using an item event receiver from a custom list. The ItemDeleting event will require approval via the workflow.

    Foreword

    As you may have read recently, I injured my right hand and have had it in a cast for the past 3 weeks. Due to this I planned to reduce my blogging while my hand heals. As luck would have it, I was actually approached by someone who asked if they could be a guest author on my blog. I've never had a guest author, but considering my injury, now seemed like as good a time as ever to try it out.

    About the Guest Author

    Lucas Eduardo Silva (email) works for CPM Braxis, a sibling company to my employer Sogeti in the CapGemini family. Lucas and I exchanged emails a few times after one of my recent posts and continued into various topics. When I posted that I had injured my hand, Lucas mentioned that he had a post idea that he would like to publish and asked if it could be published on my blog. The content below is the result of that collaboration.

    The Problem

    Lucas has a big problem. He has a workflow that he wants to fire every time an item is deleted from a custom list. He has already created the association in the "item deleting" event, but he needs to approve the deletion, and the workflow finishes first. Lucas put an onWorkflowItemChanged wait on the change of approval status, but it is not being hit.

    The Solution

    Note: This solution assumes you have the Visual Studio Extensions for Windows SharePoint Services (VSeWSS) installed to access the SharePoint project templates within Visual Studio.

    1 - Create a workflow that will be activated by the ItemEventReceiver.
    2 - Create the list in Visual Studio by clicking File -> New -> Project. Select SharePoint, then List Definition.
    3 - Select the type of document to be created: List, Document Library, Wiki, Tasks, etc.
    4 - Visual Studio creates the file ItemEventReceiver.cs with all possible events in a list.
    5 - In the workflow project, open workflow.xml and copy the ID.
    6 - Uncomment the ItemDeleting event and insert the following code, replacing the GUID with the ID that you copied earlier.

        //Cancel the Exclusion
        properties.Cancel = true;

        //Activating Exclusion Workflow
        SPWorkflowManager workflowManager = properties.ListItem.Web.Site.WorkflowManager;

        SPWorkflowAssociation wfAssociation = properties.ListItem.ParentList.WorkflowAssociations.
            GetAssociationByBaseID(new Guid("37b5aea8-792a-4ded-be25-d283d9fe1f9d"));

        workflowManager.StartWorkflow(properties.ListItem, wfAssociation, wfAssociation.AssociationData, true);

        properties.Status = SPEventReceiverStatus.CancelNoError;

    7 - properties.Cancel cancels the event being handled and executes the code inside the event. In the example, it cancels the deletion of the item and starts the workflow that is associated with the list via the workflow ID.
    8 - Build and deploy the workflow and the list to SharePoint.
    9 - Create a list from the list definition that was created.
    10 - Enable the workflow on the list and congratulations! Every time you try to delete an item, the workflow is activated.

    TIP: If you really want to delete the item after the workflow is done, you will have to delete the item from within the workflow:

        this.workflowProperties.Site.AllowUnsafeUpdates = true;
        this.workflowProperties.Item.Delete();
        this.workflowProperties.List.Update();

    Conclusion

    In this guest post Lucas took you through the steps of creating an item deletion approval workflow with an event receiver. This was also the first time I've had a guest author on this blog. Many thanks to Lucas for putting together this content and offering it. I haven't decided how I'd handle future guest authors, mostly because I don't know if there are others who would want to submit content. If you do have something that you would like to guest author on my blog, feel free to drop me a line and we can discuss. As a disclaimer, there are no guarantees that it will be published though. For now, enjoy Lucas' post and look for my return to regular blogging soon.

    -Frog Out

    <Update 1> If you wish to contact Lucas you can reach him at [email protected] </Update 1>

    Read the article

  • Operating systems theory -- using minimum number of semaphores

    - by stackuser
    This situation is prone to deadlock of processes in an operating system, and I'd like to solve it with the minimum number of semaphores. Basically there are three cooperating processes that all read data from the same input device. Each process, when it gets the input device, must read two consecutive data items. I want to use mutual exclusion for this, and semaphores should be used to synchronize:

        P1:             P2:             P3:
        input(a1,a2)    input(b1,b2)    input(c1,c2)
        Y = a1 + c1     W = b2 + c2     Z = a2 + b1
        Print(X)        X = Z - Y + W

    The declaration and initialization that I think would work here are:

        semaphore s = 1
        sa1 = 0, sa2 = 0, sb1 = 0, sb2 = 0, sc1 = 0, sc2 = 0

    I'm sure that any kernel programmers that happen on this can knock it out in a minute or 2. (Diagram of the cooperating processes and the single input device omitted.) It seems like P1 and P2 would start with something like:

        wait(s)
        input(a1/b1, a2/b2)
        signal(s)
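
    Just to illustrate the mutual-exclusion part of the exercise (not the full data-passing solution, and assuming a single binary semaphore s = 1 guarding the device, as in the question), the pattern each process follows around the shared input device could look like this in Java:

        import java.util.concurrent.Semaphore;

        public class SharedInputDevice {
            // One binary semaphore guards the input device (the "s = 1" above).
            private static final Semaphore device = new Semaphore(1);

            // Each process reads its two consecutive data items inside the critical section.
            static int[] readTwo() throws InterruptedException {
                device.acquire();          // wait(s)
                try {
                    int first = readFromDevice();
                    int second = readFromDevice();
                    return new int[] { first, second };
                } finally {
                    device.release();      // signal(s)
                }
            }

            // Stand-in for the real device read; hypothetical.
            private static int readFromDevice() {
                return (int) (Math.random() * 10);
            }

            public static void main(String[] args) {
                for (String p : new String[] { "P1", "P2", "P3" }) {
                    new Thread(() -> {
                        try {
                            int[] data = readTwo();
                            System.out.println(p + " read " + data[0] + "," + data[1]);
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                    }).start();
                }
            }
        }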

    Read the article
