Search Results

Search found 288 results on 12 pages for 'identifiers'.

Page 7/12 | < Previous Page | 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Select the first row in a join of two tables in one statement

    - by Oscar Cabrero
    Hi, I need to select only the first row from a query that joins tables A and B; in table B there are multiple records with the same name. There are no identifiers in either of the two tables, and I can't change the schema either, because I do not own the DB.

        TABLE A: NAME
        TABLE B: NAME, DATA1, DATA2

        Select Distinct A.NAME, B.DATA1, B.DATA2
        From A Inner Join B on A.NAME = B.NAME

    This gives me

        NAME      DATA1  DATA2
        sameName  1      2
        sameName  1      3
        otherName 5      7
        otherName 8      9

    but I need to retrieve only one row per name:

        NAME      DATA1  DATA2
        sameName  1      2
        otherName 5      7

    I was able to do this by adding the result to a temp table with an identity column and then selecting the Min Id per name. The problem is that I need to do this in one single statement. This is a DB2 database. Thanks
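    A single-statement approach, assuming a DB2 release with the ROW_NUMBER() OLAP function available (the ORDER BY columns here are an arbitrary choice of which row counts as "first"):

        SELECT NAME, DATA1, DATA2
        FROM (
            SELECT A.NAME, B.DATA1, B.DATA2,
                   ROW_NUMBER() OVER (PARTITION BY A.NAME
                                      ORDER BY B.DATA1, B.DATA2) AS RN
            FROM A
            INNER JOIN B ON A.NAME = B.NAME
        ) T
        WHERE RN = 1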

    Read the article

  • customisable JSLint

    - by Don
    Hi, I'm looking for a tool that checks JS code and can be integrated into a Maven build. I need a tool that will check for errors such as use of reserved words as identifiers, or a trailing comma, e.g.

        var obj = {
            a: 1,
            b: 2,
        }

    JSLint seems like a perfect candidate, but the problem is that it is too strict: it also checks for various coding patterns which are (arguably) bad style but do not actually generate errors in a browser. Examples of such issues include "Disallow ++ and --" and "Allow one var statement per function". If possible, I would like the errors to fail the build, and I would like the other rules to only print warnings (or disable them completely). Obviously, I need the ability to specify which of the available rules I consider errors and which I consider warnings. Thanks, Don
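    One configurable route worth sketching (not from the question): JSHint, a fork of JSLint with switchable checks, reads options from a .jshintrc file, and community-maintained Maven plugins exist to run it during a build. The two stylistic checks mentioned map to options roughly like these (option names have varied across JSHint versions, so verify against your version's docs; JSHint tolerates comments in its config file):

        {
            "plusplus": false,   // don't flag ++ and --
            "onevar":   false    // allow more than one var statement per function
        }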

    Read the article

  • a small question on GNUPlot

    - by Robert
    Dear guys, I have used the following statement to get GNUPlot to plot a graph for me:

        plot "force.dat" using 1:2 title "Detroit" with lines, \
             "force.dat" u 1:3 t "US Average" w linepoints

    and "force.dat" looks like

        2005 0 0
        2006 104 51
        2007 202 101

    It draws a nice graph for me. However, I don't like the x-axis, because it is labelled 2005, 2005.5, 2006, 2006.5, 2007, etc. Those are year identifiers, so I only want 2005, 2006, 2007, etc. How can I get rid of the 2005.5, 2006.5, etc. labels in my GNUPlot graph? Thank you very much for your ideas.
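    A minimal fix is to set the tic increment to a whole unit before plotting:

        set xtics 1    # one tic per year, so no half-year labels
        plot "force.dat" using 1:2 title "Detroit" with lines, \
             "force.dat" u 1:3 t "US Average" w linepoints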

    Read the article

  • Python __setattr__ and __getattr__ for global scope?

    - by KT
    Suppose I need to create my own small DSL that would use Python to describe a certain data structure. E.g. I'd like to be able to write

        f(x) = some_stuff(a, b, c)

    and have Python, instead of complaining about undeclared identifiers or attempting to invoke the function some_stuff, convert it to a literal expression for my further convenience. It is possible to get a reasonable approximation to this by creating a class with properly redefined __getattr__ and __setattr__ methods and using it as follows:

        e = Expression()
        e.f[e.x] = e.some_stuff(e.a, e.b, e.c)

    It would be cool, though, if it were possible to get rid of the annoying "e." prefixes and maybe even avoid the use of []. So I was wondering: is it possible to somehow temporarily "redefine" global name lookups and assignments? On a related note, maybe there are good packages for easily achieving such "quoting" functionality for Python expressions?
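    A minimal runnable sketch of the "e." approximation the question describes (all names here are illustrative):

        class Sym:
            """A quoted name; calling or indexing it just builds text."""
            def __init__(self, name):
                self.name = name
            def __repr__(self):
                return self.name
            def __call__(self, *args):
                # e.some_stuff(e.a) -> Sym("some_stuff(a)")
                return Sym("%s(%s)" % (self.name, ", ".join(map(str, args))))
            def __setitem__(self, key, value):
                # e.f[e.x] = rhs -> record the "equation" instead of executing it
                print("%s[%s] = %s" % (self.name, key, value))

        class Expression:
            def __getattr__(self, name):   # e.foo -> Sym("foo")
                return Sym(name)

        e = Expression()
        e.f[e.x] = e.some_stuff(e.a, e.b, e.c)   # prints: f[x] = some_stuff(a, b, c)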

    Read the article

  • Visual C++ 2008 doesn't recognize Windows declared types

    - by David Thornley
    I have a program that doesn't seem to recognize declared types in the latest U3D software. There's a line

        typedef BOOL (WINAPI* GMI)(HMON, LPMONITORINFOEX);

    which gets the error:

        Error 1 error C2061: syntax error : identifier 'LPMONITORINFOEX' c:\Projects\U3D\Source\RTL\Platform\Common\Win32\IFXOSRender.cpp 28

    and a line

        MONITORINFOEX miMon;

    which gets:

        Error 5 error C2065: 'miMon' : undeclared identifier c:\Projects\U3D\Source\RTL\Platform\Common\Win32\IFXOSRender.cpp 49
        Error 3 error C2065: 'MONITORINFOEX' : undeclared identifier c:\Projects\U3D\Source\RTL\Platform\Common\Win32\IFXOSRender.cpp 49

    The program's first non-comment statement is #include <windows.h>, which includes winuser.h, which defines these identifiers. In Visual Studio, I can right-click on them, go to the definition (a typedef), and go from the typedef to the struct. WINAPI is defined in WinDef.h, so that seems to be working. There are no redefinitions of LPMONITORINFOEX or MONITORINFOEX in any other file. So how can this be happening, and what can I do about it?
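    One likely cause worth checking (an assumption, since the build settings aren't shown): MONITORINFOEX and LPMONITORINFOEX are declared in winuser.h only inside an #if(WINVER >= 0x0500) block, so a project that targets an older Windows version never sees them, even though the IDE's go-to-definition ignores the preprocessor and still finds the typedef. A sketch of the usual fix:

        // Define the target Windows version before the first <windows.h>
        // include (or in the project's preprocessor settings) so that
        // winuser.h exposes the multi-monitor declarations.
        #define WINVER       0x0500
        #define _WIN32_WINNT 0x0500
        #include <windows.h>

        typedef BOOL (WINAPI* GMI)(HMONITOR, LPMONITORINFOEX);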

    Read the article

  • AppleScript: Tell by Variable dilemma

    - by Johnny Grass
    I would like to do this:

        tell application "Finder" to set appName to (application file id "com.google.Chrome") as text
        using terms from application appName
            tell application appName to get URL of active tab of first window
        end using terms from

    This doesn't work, because "using terms from" requires an application name as a string constant. If I substitute the line

        using terms from application appName

    with

        using terms from application "Google Chrome"

    it works. However, I don't want to rely on the target machine having the application named "Google Chrome"; using the bundle identifier seems safer. Is there a better way to do this?
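    One sketch that sidesteps the display name (assuming the target app is present on the machine where the script is compiled, so its dictionary can be resolved): AppleScript accepts a bundle identifier directly as the tell target.

        tell application id "com.google.Chrome"
            get URL of active tab of first window
        end tell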

    Read the article

  • Facebook Scrumptious sample app won't build

    - by Chase Roberts
    I did exactly what they do in the video. However, when I get to the Scrumptious app and try to build/run it, mine fails with: "Parse Issue. Expected a type." Here are the two lines it flags (located in ACAccountStore.h):

        // Returns the account type object matching the account type identifier. See
        // ACAccountType.h for well known account type identifiers
        - (ACAccountType *)accountTypeWithAccountTypeIdentifier:(NSString *)typeIdentifier;

        // Returns the accounts matching a given account type.
        - (NSArray *)accountsWithAccountType:(ACAccountType *)accountType;

    Here is a link to the tutorial; I didn't even make it two minutes in before I hit this wall. http://developers.facebook.com/docs/getting-started/facebook-sdk-for-ios/3.1/ I am running Xcode v4.4.1.

    Read the article

  • R: how to make a unique set of names from a vector of strings?

    - by Mike Dewar
    Hi, I have a vector of strings. Check out my vector, it's awesome:

        > awesome
        [1] "a" "b" "c" "d" "d" "e" "f" "f"

    I'd like to make a new vector that is the same length as awesome but where, if necessary, the strings have been uniqueified. For example, a valid output of my desired function would be

        > awesome.uniqueified
        [1] "a" "b" "c" "d.1" "d.2" "e" "f.1" "f.2"

    Is there an easy, R-thonic and beautiful way to do this? I should say my list in real life (it's not called awesome) contains 25000ish microarray probeset identifiers. I'm always nervous when I embark on writing little generic functions (which I'm sure I could do), as I'm sure some R guru has come across this problem in the past and nailed it with some incredible algorithm that doesn't even have to store more than half an element in the vector. I'm just not sure what they might have called it. Probably not uniqueify.
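    Two base-R sketches; the ave() one suffixes every member of a duplicated group, matching the desired output (the "." separator is an arbitrary choice):

        awesome <- c("a", "b", "c", "d", "d", "e", "f", "f")

        # Suffix all duplicates: "d.1" "d.2", "f.1" "f.2"
        ave(awesome, awesome, FUN = function(x)
            if (length(x) > 1) paste(x, seq_along(x), sep = ".") else x)

        # Built-in, but leaves the first occurrence unsuffixed:
        make.unique(awesome)   # "a" "b" "c" "d" "d.1" "e" "f" "f.1"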

    Read the article

  • Plot multiple files

    - by LucaB
    Hi, I'd like to make an animation that illustrates the positions of some agents I'm simulating under Linux. Basically, I have files named file00001.dat, file00002.dat and so on. I have to generate "something" that reads the files in order and outputs an animated gif, a dynamic graph or whatever, simulating the movement by reading data from file after file. I have control over the files, meaning that I can put identifiers or anything else I want in them. How would you achieve that? What programs would you use?
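    One gnuplot sketch (assumes gnuplot 4.6 or newer built with GIF support; the frame count, file-name pattern and column layout are assumptions to adjust):

        set terminal gif animate delay 10
        set output "agents.gif"
        do for [i=1:100] {
            plot sprintf("file%05d.dat", i) using 1:2 \
                 with points pt 7 title sprintf("frame %d", i)
        }
        unset output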

    Read the article

  • How to very efficiently assign lat/long to a city boundary described by a shape?

    - by watcherFR
    I have a huge shapefile of 36,000 non-overlapping polygons (city boundaries). I want to easily determine the polygon into which a given lat/long falls. What would be the best way, given that it must be extremely computationally efficient? I was thinking of creating a lookup table (tilex, tiley, polygon_id), where tilex and tiley are tile identifiers at zoom level 21 or 22. Yes, the lack of precision from using tile numbers and a planar projection is acceptable in my application. I would rather not use postgres's GIS extension, and I am fine with a program that will run for 2 days to generate all the INSERT statements.
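    For the lookup side, the tile indices come from the standard Web-Mercator "slippy map" formula; a small sketch (zoom level as discussed, table access left out):

        import math

        def tile_xy(lat, lon, zoom=21):
            # Standard slippy-map tile indices for a lat/long pair.
            n = 2 ** zoom
            x = int((lon + 180.0) / 360.0 * n)
            lat_r = math.radians(lat)
            y = int((1.0 - math.log(math.tan(lat_r) + 1.0 / math.cos(lat_r)) / math.pi) / 2.0 * n)
            return x, y

    A point lookup is then a single indexed read on (tilex, tiley); only tiles that straddle a boundary would ever need a fallback point-in-polygon test, and the question already accepts that imprecision.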

    Read the article

  • What are the benefits and drawbacks of using header files?

    - by vodkhang
    I have some experience with programming languages like Java, C# and Scala, as well as lower-level languages like C, C++ and Objective-C. My observation is that low-level languages separate header files from implementation files, while higher-level languages never separate them out; they use identifiers like public, private and protected to do the job of header files. I have seen one benefit of header files mentioned (in books like Code Complete): with header files, people never need to look at our implementation file, which helps with encapsulation. A drawback is that it creates too many files for me; sometimes it feels verbose. That is just my impression, and I don't know if there are other benefits and drawbacks that people see when working with header files. This question may not relate directly to programming, but I think understanding it will help me program to interfaces and design software better.

    Read the article

  • Scala's tuple unwrapping nuance

    - by Paul Milovanov
    I've noticed the following behavior in Scala when trying to unwrap tuples into vals:

        scala> val (A, B, C) = (1, 2, 3)
        <console>:5: error: not found: value A
        <console>:5: error: not found: value B
        <console>:5: error: not found: value C

        scala> val (u, v, w) = (1, 2, 3)
        u: Int = 1
        v: Int = 2
        w: Int = 3

    Is that because Scala's pattern-matching mechanism automatically presumes that all identifiers starting with capitals within patterns are constants, or is it due to some other reason? Thanks!
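    That is indeed the rule: in a pattern, an identifier beginning with an uppercase letter is read as a stable reference to an existing value and matched by equality, while a lowercase identifier introduces a fresh binding. A small sketch (if the referenced value doesn't match, the val throws a MatchError at runtime):

        val X = 1
        val (X, b, c) = (1, 2, 3)      // X is matched against the existing value 1; b and c are bound

        val n = 2
        val (a, `n`, c2) = (1, 2, 3)   // backticks force the "existing value" reading for a lowercase name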

    Read the article

  • How to model a relationship that NHibernate (or Hibernate) doesn’t easily support

    - by MylesRip
    I have a situation in which the ideal relationship, I believe, would involve value-object inheritance. That is unfortunately not supported in NHibernate, so any solution I come up with will be less than perfect. Let's say that:

    - "Item" entities have a "Location" that can be in one of multiple different formats. These formats are completely different, with no overlapping fields. We will deal with each Location in the format provided in the data, with no attempt to convert from one format to another. Each Item has exactly one Location.
    - "SpecialItem" is a subtype of Item that is unique in that it has exactly two Locations.
    - "Group" entities aggregate Items.
    - "LocationGroup" is a subtype of Group. LocationGroup also has a single Location that can be in any of the formats described above.
    - Although I'm interested in Items by Group, I'm also interested in finding all Items with the same Location, regardless of which Group they are in.

    I apologize for the number of stipulations listed above, but simplifying any further wouldn't really reflect the difficulties of the situation. Here is how the above could be diagrammed: Mapping Dilemma Diagram (http://www.freeimagehosting.net/uploads/592ad48b1a.jpg). Analyzing the above, I make the following observations:

    - I treat Locations polymorphically, referring to the supertype rather than the subtype.
    - Logically, Locations should be "value objects" rather than entities, since it is meaningless to differentiate between two Location objects that have all the same values. Equality between Locations should therefore be based on field comparisons, not identifiers. Also, value objects should be immutable, and shared references should not be allowed.
    - Using NHibernate (or Hibernate) one would typically map value objects using the "component" keyword, which causes the fields of the class to be mapped directly into the database table that represents the containing class. Put another way, there would be no separate "Locations" table in the database (and Locations would therefore have no identifiers).
    - NHibernate (or Hibernate) does not currently support inheritance for value objects.

    My choices as I see them are:

    1. Ignore the fact that Locations should be value objects and map them as entities. This takes care of the inheritance mapping issues, since NHibernate supports entity inheritance. The downside is that I then have to deal with aliasing issues: if multiple objects share a reference to the same Location, changing values through one object's Location changes the Location of every other object referencing the same record. I want to avoid this if possible. Another downside is that entities are typically compared by their IDs, so two Location objects would be considered unequal even if all their field values are the same, which is invalid and unacceptable from the business perspective.

    2. Flatten Locations into a single class so that there are no longer inheritance relationships for Locations. This allows Locations to be treated as value objects, easily handled with "component" mapping in NHibernate. The downside is that the domain model becomes weaker, more fragile and less maintainable.

    3. Do some "creative" mapping in the hbm files to force Location fields to be mapped into the containing entities' tables without using the "component" keyword. This approach is described by Colin Jack here. My situation is more complicated than the one he describes, because SpecialItem has a second Location and because a different entity, LocationGroup, also has Locations. I could probably get it to work, but the mappings would be non-intuitive and therefore hard for other developers to understand and maintain. I also suspect these tricky mappings would not be possible in Fluent NHibernate, so I would lose the advantages of that tool in this situation.

    Surely others out there have run into similar situations. I'm hoping someone who has "been there, done that" can share some wisdom. :-) So here's the question: which approach should be preferred in this situation? Why?
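    For reference, the component route of option 2 would look roughly like this in hbm.xml (class, table and property names here are illustrative, not from the question):

        <class name="Item" table="Items">
          <id name="Id">
            <generator class="native" />
          </id>
          <!-- Location's fields land in the Items table itself;
               no Locations table, no Location identifier. -->
          <component name="Location" class="Location">
            <property name="Format" />
            <property name="FieldA" />
            <property name="FieldB" />
          </component>
        </class>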

    Read the article

  • RESTful web service, PUTting an unnamed resource?

    - by James L
    I have a back-end service that creates unique identifiers for resources. The general idea is that resources are saved and versioned, so you can perform

        GET http://service/sales/targets/7818181919/latest

    or

        GET http://service/sales/targets/7818181919/4

    for version 4, and so on. My question is about the most correct way to upload these resources in the first place. How about

        PUT http://service/sales/targets/

    returning 303 See Other for /service/sales/targets/? It seems a little wrong, as you should PUT to and GET from exactly the same place in a resource-oriented interface, but I can't think of a better option. Any ideas?
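    The conventional pattern when the server mints the identifier is to POST to the collection and answer 201 Created with a Location header, keeping PUT for the case where the client already knows the final URL. A sketch (paths and body are illustrative):

        POST /sales/targets HTTP/1.1
        Host: service
        Content-Type: application/json

        { "region": "EMEA", "amount": 100000 }

        HTTP/1.1 201 Created
        Location: http://service/sales/targets/7818181919/1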

    Read the article

  • Load links as unselected in jQuery

    - by shummel7845
    I have sets of links with id and name attributes: ids for individual identifiers and names for groupings. In my jQuery, I have a click function that manipulates the CSS based on what the user clicks on. But I want the freshly loaded page to start with some of the links disabled (no href attribute) and with a style applied. I tried this, but it didn't work:

        $(function() // Alias for $(document).ready()
        {
            var $links = $('li a');
            $links.(function(){
                $('a[name="beam"]').filter(function() {
                    $(this).addClass('unavailable');
                    $(this).removeAttr('href');
                });
            });
        ......

    Ideas? (Beam is one of the groupings by name.)
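    A minimal working sketch of the intended load-time behavior: $links.(...) isn't valid syntax, and filter() expects a predicate that returns true/false rather than a callback that mutates; the class and attribute changes can simply be chained on the selection.

        $(function () {
            // On load: disable and style every link in the "beam" group.
            $('li a[name="beam"]')
                .addClass('unavailable')
                .removeAttr('href');
        });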

    Read the article

  • What to do with "Default Web Site" on a Japanese OS? (e.g. ??? Web ???)

    - by Mike Atlas
    I'm currently in the process of upgrading old IIS6 automation scripts that use the iisvdir tool to create/modify/update apps and virtual directories. The IIS6 "iisvdir" commands reference paths that come from the metabase, e.g. "/W3SVC/1/ROOT/MyApp", where 1 is the "Default" website; the command doesn't actually require the display name of the site to make changes to it. On an English OS, the "Default Web Site" site name is fine for use with AppCmd, but if the OS is Japanese, it will be named "??? Web ???". So how can I script AppCmd to refer to sites, vdirs and apps using language-neutral identifiers to reference the "Default Web Site"? (Disclosure: I only have an IIS7-English machine that I am working on currently, but I have both IIS6-English and IIS6-Japanese machines for testing my old scripts; so perhaps it really is just "Default Web Site" still on Win2k8-Japanese?)
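    One language-neutral sketch for the IIS7 side ("list site" and "add app" are real appcmd verbs; the /id attribute filter and the paths are assumptions to verify on your build): address the site by its numeric ID, resolve the localized name once, then reuse it.

        rem List the site whose numeric ID is 1, whatever its display name is:
        %windir%\system32\inetsrv\appcmd list site /id:1

        rem In a batch script: capture the localized name, then reuse it.
        for /f "usebackq delims=" %%s in (
            `%windir%\system32\inetsrv\appcmd list site /id:1 /text:name`
        ) do set SITENAME=%%s
        %windir%\system32\inetsrv\appcmd add app /site.name:"%SITENAME%" /path:/MyApp /physicalPath:C:\inetpub\MyApp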

    Read the article

  • Data Masking Pack 12.1.0.3 Certified with E-Business Suite 12.1.3

    - by Elke Phelps (Oracle Development)
    I'm pleased to announce the certification of the E-Business Suite 12.1.3 Data Masking Template for the Data Masking Pack with Enterprise Manager Cloud Control 12.1.0.3. You can use the Oracle Data Masking Pack with Oracle Enterprise Manager Grid Control 12c to scramble sensitive data in cloned E-Business Suite environments.     You may scramble data in E-Business Suite cloned environments with EM12.1.0.3 using the following template: E-Business Suite 12.1.3 Data Masking Template for Data Masking Pack with EM12c (Patch 18462641) What does data masking do in E-Business Suite environments? Application data masking does the following: De-identify the data:  Scramble identifiers of individuals, also known as personally identifiable information or PII.  Examples include information such as name, account, address, location, and driver's license number. Mask sensitive data:  Mask data that, if associated with personally identifiable information (PII), would cause privacy concerns.  Examples include compensation, health and employment information.   Maintain data validity:  Provide a fully functional application.  How can EBS customers use data masking? The Oracle E-Business Suite Template for Data Masking Pack can be used in situations where confidential or regulated data needs to be shared with other non-production users who need access to some of the original data, but not necessarily every table.  Examples of non-production users include internal application developers or external business partners such as offshore testing companies, suppliers or customers.  Due to data dependencies, scrambling E-Business Suite data is not a trivial task.  The data needs to be scrubbed in such a way that allows the application to continue to function. The template works with the Oracle Data Masking Pack and Oracle Enterprise Manager to obscure sensitive E-Business Suite information that is copied from production to non-production environments.  The Oracle E-Business Suite Template for Data Masking Pack is applied to a non-production environment with the Enterprise Manager Grid Control Data Masking Pack.  When applied, the Oracle E-Business Suite Template for Data Masking Pack will create an irreversibly scrambled version of your production database for development and testing. Is there a charge for this? Yes. You must purchase licenses for the Oracle Data Masking Pack to use the Oracle E-Business Suite 12.1.3 template. The Oracle E-Business Suite 12.1.3 Template for the Data Masking Pack is included with the Oracle Data Masking Pack license.  You can contact your Oracle account manager for more details about licensing. References Additional details and requirements are provided in the following My Oracle Support Note: Using Oracle E-Business Suite Release 12.1.3 Template for the Data Masking Pack with Oracle Enterprise Manager 12.1 Data Masking Tool (Note 1481916.1) Masking Sensitive Data in the Oracle Database Real Application Testing User's Guide 11g Release 2 (11.2) Related Articles Scrambling Sensitive Data in E-Business Suite E-Business Suite 12.1.3 Data Masking Certified with Enterprise Manager 12c

    Read the article

  • OWB 11gR2 – Degenerate Dimensions

    - by David Allan
    Ever wondered how to build degenerate dimensions in OWB and still get the benefits of slowly changing dimensions and cube loading? It's now possible, through some changes in 11gR2 that make dimension and cube loading much more flexible. This lets you keep the benefits of OWB's surrogate key handling and slowly changing dimension references when loading the fact table while also needing degenerate dimensions (see Ralph Kimball's degenerate dimensions design tip). Here we will see how to use the cube operator to load slowly changing, regular and degenerate dimensions. The cube and cube operator can now work with dimensions which have no surrogate key as well as dimensions with surrogates, so you can get the benefit of the cube loading and incorporate the degenerate dimension loading.

    What you need to do is create a dimension in OWB that is purely used for ETL metadata. The dimension itself is never deployed (its table is, but holds no data); it has no surrogate keys; and it has a single level with a business attribute (the degenerate dimension data) and a dummy attribute, say description, just to pass OWB validation. When this degenerate dimension is added into a cube, you will need to configure the fact table created and set the 'Deployable' flag to FALSE for the foreign key generated to the degenerate dimension table. The degenerate dimension reference will then be in the cube operator and used when matching.

    Create the degenerate dimension using the regular wizard. Delete the Surrogate ID attribute; this is not needed. Define a level name for the dimension member (any name). After the wizard has completed, in the editor delete the hierarchy STANDARD that was automatically generated: there is only a single level, so no hierarchy is needed and it shouldn't really be created. Deploy the implementing table DD_ORDERNUMBER_TAB; it needs to be deployed, but with no data (the mapping here will do a left outer join of the source data with the empty degenerate dimension table).

    Now go ahead and build your cube: use the regular TIMES dimension, for example, plus your degenerate dimension DD_ORDERNUMBER, and you can add in SCD dimensions etc. Configure the fact table created and set Deployable to false, so the foreign key does not get generated. You can now use the cube in a mapping and load data into the fact table via the cube operator; this will look after surrogate lookups and slowly changing dimension references. If you generate the SQL you will see that the ON clause for matching includes the columns representing the degenerate dimension.

    Here we have seen how this use case for loading fact tables using degenerate dimensions becomes a whole lot simpler using OWB 11gR2. I'm sure there are other use cases where this mix of dimensions with surrogate and regular identifiers is useful; fact tables partitioned by date columns are another classic example that this will greatly help with and that makes the cube operator much more useful. Good to hear any comments.

    Read the article

  • How to set the initial component focus

    - by frank.nimphius
    In ADF Faces, you use the af:document tag's initialFocusId attribute to define the initial component focus. For this, specify the id property value of the component that you want to put the initial focus on. Identifiers are relative to the component and must account for NamingContainers. You can use a single colon to start the search from the root, or multiple colons to move up through the NamingContainers: "::" pops out of the component's naming container and begins the search from there, and ":::" pops out of two naming containers and begins the search from there. Alternatively, you can add the naming container ids as a prefix to the component id, e.g. nc1:nc2:comp1. http://download.oracle.com/docs/cd/E17904_01/apirefs.1111/e12419/tagdoc/af_document.html To set the initial focus to a component located in a page fragment that is exposed through an ADF region, keep in mind that the ADF Faces region (af:region) is a naming container too. To address an input text field with the id "it1" in an ADF region exposed by an af:region tag with the id "r1", you use the following reference in af:document:

        <af:document id="d1" initialFocusId="r1:0:it1">

    Note the "0" index in the client id. Also, make sure the input text component has its clientComponent property set to true, as otherwise no client component exists to put the focus on.
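    A small sketch putting the pieces together (the component ids match the example above; the region's task-flow binding name is an assumption):

        <af:document id="d1" initialFocusId="r1:0:it1">
          <af:form id="f1">
            <af:region value="#{bindings.myTaskFlow1.regionModel}" id="r1"/>
          </af:form>
        </af:document>

        <!-- In the page fragment shown by the region: -->
        <af:inputText id="it1" label="Name" clientComponent="true"/>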

    Read the article

  • In-Store Tracking Gets a Little Harder

    - by David Dorf
    Remember how Nordstrom was tracking shopper movements within their stores using the unique number, called a MAC, emitted by the WiFi radio in smartphones? The phones didn't need to connect to the network, only have their WiFi enabled, as most people do by default. They did this, presumably, to track shoppers' path to purchase and better understand traffic patterns. Although there were signs explaining this at the entrances, people didn't like the notion of being tracked. (Never mind that there are cameras in the ceiling watching them.) Nordstrom stopped the program.

    To address this concern, the Future of Privacy, a Washington think tank, created Smart Store Privacy, a do-not-track service that allows consumers to register their MAC address in much the same way people register their phone numbers in the national do-not-call list. A group of companies agreed to respect consumers' wishes and ignore smartphones listed in the database. The database includes Bluetooth identifiers as well. Of course you could simply turn your Bluetooth and WiFi off when shopping as well.

    Most know that Apple prefers to use BLE beacons to contact and track smartphones within their stores. This feature extends the typical online experience to also work in physical stores. By identifying themselves, shoppers can expect a more tailored shopping experience, much like what we've come to expect from Amazon's website, with product recommendations and offers that are (usually) relevant.

    But the upcoming release of iOS8 is purported to have a new feature that randomizes the WiFi MAC address of smartphones during the "probing" phase. That is, before connecting to the WiFi network, a random MAC number is used so as to keep the smartphone's real MAC address secret. Unless you actually connect to the store's WiFi, they won't recognize the MAC address. The details on this are still sketchy, but if the random MAC is consistent for a short period, retailers will still be able to track movements anonymously; they just won't recognize repeat visitors. That may be sufficient for traffic analytics, but it will stymie target marketing. In the case of marketing, using iBeacons with opt-in permission from consumers will be the way forward.

    There is always a battle between utility and privacy, so I expect many more changes in this area. Incidentally, if you'd like to see where beacons are being used, this site tracks them around the world.

    Read the article

  • cpan won't configure correctly on centos6, can't connect to internet

    - by dan
    I have CentOS 6 set up and have installed perl-CPAN. When I run cpan, it takes me through the setup and ends by telling me it can't connect to the internet and to enter a mirror. I enter a mirror, but it still can't install the package. What am I doing wrong?

        If you're accessing the net via proxies, you can specify them in the CPAN
        configuration or via environment variables. The variable in the
        $CPAN::Config takes precedence.
        <ftp_proxy> Your ftp_proxy? []
        <http_proxy> Your http_proxy? []
        <no_proxy> Your no_proxy? []
        CPAN needs access to at least one CPAN mirror. As you did not allow me to
        connect to the internet you need to supply a valid CPAN URL now.
        Please enter the URL of your CPAN mirror
        mirror.cc.columbia.edu::cpan
        Configuration does not allow connecting to the internet.
        Current set of CPAN URLs: mirror.cc.columbia.edu::cpan
        Enter another URL or RETURN to quit: []
        New urllist mirror.cc.columbia.edu::cpan
        Please remember to call 'o conf commit' to make the config permanent!

        cpan shell -- CPAN exploration and modules installation (v1.9402)
        Enter 'h' for help.

        cpan[1]> install File::Stat
        CPAN: Storable loaded ok (v2.20)
        LWP not available
        Warning: no success downloading '/root/.cpan/sources/authors/01mailrc.txt.gz.tmp918'.
        Giving up on it. at /usr/share/perl5/CPAN/Index.pm line 225
        ^CCaught SIGINT, trying to continue
        Warning: Cannot install File::Stat, don't know what it is.
        Try the command i /File::Stat/ to find objects with matching identifiers.
        cpan[2]>
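    Two things stand out in that transcript: "LWP not available" means the cpan client has no HTTP transport to download with, and "mirror.cc.columbia.edu::cpan" is rsync syntax rather than the URL cpan expects. A sketch of the usual fixes (package name as I recall it for CentOS 6; verify with yum search):

        # Give CPAN a transport:
        yum install perl-libwww-perl

        # Then, in the cpan shell, add a real URL and commit:
        cpan> o conf urllist push http://www.cpan.org/
        cpan> o conf commit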

    Read the article

  • To clone or to automate a system installation?

    - by Shtééf
    Let's say you're setting up a cluster of servers performing the same task. Or say you're just setting up a bunch of different servers, but you expect to use a base configuration on all of your servers. Would it be better practice to create a base image and clone it, or to automate the installation and configuration? I occasionally end up in this argument with my boss, in situations where we're time-pressed. When he sees me struggle with perfecting the automation, his suggestion is often to clone the entire disk to the other machines. But my instinct has always been to avoid cloning. This is mostly from an Ubuntu perspective, but the question is fairly general. My reasons for avoiding cloning are: On a typical install, even if it's fresh, there are already several unique identifiers installed: filesystem UUIDs, SSH host keys, among others. These would have to be regenerated. Network needs to be reconfigured for each clone. This would need to be done off-line, of course, or the settings will conflict with other machines on the network. On the other hand, some of the cloning advantages are quite clear as well: (Initially?) less effort required than automating configuration. Tools exist to quickly address (some) of the above disadvantages. (I can see right through my own bias there.)

    Read the article

  • How to multiseat with HW 3d accel on CentOS 6.3 Final?

    - by user35070
    I would like to set up a multiseat configuration on CentOS 6.3 (two video cards, two keyboards, two mice, two monitors) and have hardware-accelerated 3D on both monitors. 3D HW acceleration rules out Xephyr. I saw somewhere that recent versions of GDM (3.3 and newer?) don't support multiseat, so do I have to install KDM to make this work? If I just create a duplicate section with new device identifiers in my xorg.conf file, will this 'just work'? Using different ports on the same video card and separate keyboards, mice, and displays, the result was a desktop which spanned both monitors, with both keyboards and mice acting as the same input in the GUI. I will power down, put in the new video card, and report on the results soon. Both video cards are nvidia. UPDATE: after putting in another NVIDIA video card, the default behavior (before changing xorg.conf) is that one screen works normally and both mice and keyboards are connected to it. After changing xorg.conf and the display manager to KDM, and following the directions at https://help.ubuntu.com/community/MultiseatX#Ubuntu_10.04_.28Lucid.29, I have 2 mirrored screens connected to separate video cards, DRI enabled, and 2 mice both connected to the same pointer. The keyboards don't do anything, but I probably just need to fix a setting in xorg.conf. I would still like to get multiseat functionality, e.g. separate screens with separate input devices. I have verified that the separate X processes are running (see the page above) using 'ps aux | grep "X [01]"'.
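    For the xorg.conf side, the usual static-multiseat skeleton gives each card its own Device section and each seat its own ServerLayout; a fragment sketch (BusID values, input device names, and the matching Screen/InputDevice sections are assumptions; take the bus IDs from lspci):

        Section "Device"
            Identifier "Card0"
            Driver     "nvidia"
            BusID      "PCI:1:0:0"
        EndSection

        Section "Device"
            Identifier "Card1"
            Driver     "nvidia"
            BusID      "PCI:2:0:0"
        EndSection

        Section "ServerLayout"
            Identifier  "seat0"
            Screen      "Screen0"          # Screen0 uses Card0
            InputDevice "Keyboard0" "CoreKeyboard"
            InputDevice "Mouse0"    "CorePointer"
        EndSection

        Section "ServerLayout"
            Identifier  "seat1"
            Screen      "Screen1"          # Screen1 uses Card1
            InputDevice "Keyboard1" "CoreKeyboard"
            InputDevice "Mouse1"    "CorePointer"
        EndSection

    Each seat's X server is then started against its own layout (for example, X :1 -layout seat1 -sharevts -novtswitch in the display manager's server entries), which is what keeps the input devices from merging into a single pointer.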

    Read the article

  • Debian/Redmine: Upgrade multiple instances at once

    - by Davey
    I have multiple Redmine instances. Let's call them InstanceA and InstanceB. InstanceA and InstanceB share the same Redmine installation on Debian. Suppose I wanted to install Redmine 1.3 on both instances, how would I do that? After upgrading the core files I would have to migrate the databases. What I would like to know is: can I migrate all databases in a single action? Normally I would do something like

        rake -s db:migrate RAILS_ENV=production X_DEBIAN_SITEID=InstanceA

    for each instance, but this would get tedious if you have 50+ instances. Thanks in advance! Edit: The README.Debian file that's in the (Debian) Redmine package states:

        SUPPORTS SETUP AND UPGRADES OF MULTIPLE DATABASE INSTANCES
        This redmine package is designed to automatically configure database
        BUT NOT the web server. The default database instance is called
        "default". A debconf facility is provided for configuring several
        redmine instances. Use dpkg-reconfigure to define the instances
        identifiers.

    But I can't figure out what to do with the "debconf facility". Edit 2: My environment is a default Debian 6.0 "Squeeze" installation with a default Redmine (aptitude install redmine) installation on a default libapache2-mod-passenger.
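    A shell-loop sketch for the migration step (it assumes the Debian package keeps one directory per instance under /etc/redmine/, which is where the debconf-generated configuration usually lands; verify the paths on your system first):

        #!/bin/sh
        # Migrate every configured Redmine instance in turn.
        cd /usr/share/redmine || exit 1
        for dir in /etc/redmine/*/; do
            instance=$(basename "$dir")
            rake -s db:migrate RAILS_ENV=production X_DEBIAN_SITEID="$instance"
        done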

    Read the article

  • Routing a single request through multiple nginx backend apps

    - by Jonathan Oliver
    I wanted to get an idea if anything like the following scenario was possible: Nginx handles a request and routes it to some kind of authentication application where cookies and/or other kinds of security identifiers are interpreted and verified. The app perhaps makes a few additions to the request (appending authenticated headers). Failing authentication returns an HTTP 401. Nginx then takes the request and routes it through an authorization application which determines, based upon identity and the HTTP verb (put, delete, get, etc.) and URL in question, whether the actor/agent/user has permission to performed the intended action. Perhaps the authorization application modifies the request somewhat by appending another header, for example. Failing authorization returns 403. (Wash, rinse, repeat the proxy pattern for any number of services that want to participate in the request in some fashion.) Finally, Nginx routes the request into the actual application code where the request is inspected and the requested operations are executed according to the URL in question and where the identity of the user can be captured and understood by the application by looking at the altered HTTP request. Ideally, Nginx could do this natively or with a plugin. Any ideas? The alternative that I've considered is having Nginx hand off the initial request to the authentication application and then have this application proxy the request back through to Nginx (whether on the same box or another box). I know there are a number of applications frameworks (Django, RoR, etc.) that can do a lot of this stuff "in process", but I was trying to make things a little more generic and self contained where different applications could "hook" the HTTP pipeline of Nginx and then participate in, short circuit, and even modify the request accordingly. If Nginx can't do this, is anyone aware of other web servers that will perform in the manner described above?
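    nginx can do the first leg of this natively with the auth_request module (bundled since nginx 1.5.4; it was a third-party module before that). It issues an internal subrequest and honors exactly the 401/403 short-circuit described above; a sketch with assumed upstream names:

        location / {
            auth_request /_auth;                        # 401/403 from the auth app ends the request
            auth_request_set $auth_user $upstream_http_x_auth_user;
            proxy_set_header X-Auth-User $auth_user;    # append the authenticated header
            proxy_pass http://app_backend;
        }

        location = /_auth {
            internal;
            proxy_pass http://auth_backend;
            proxy_pass_request_body off;                # subrequest sends headers only
            proxy_set_header Content-Length "";
            proxy_set_header X-Original-URI $request_uri;
        }

    Chaining a second authorization stage would mean either folding both checks into one auth service or layering another subrequest-capable module, since stock auth_request allows a single subrequest per location.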

    Read the article
