Search Results



  • Generating Deep Arrays: Shallow to Deep, Deep to Shallow or Bad idea?

    - by MobyD
    I'm working on an array structure that will be used as the data source for a report template in a web app. The data comes from relatively complex SQL queries that return one or many rows as one-dimensional associative arrays. In the case of many rows, they are turned into a two-dimensional indexed array. The data is complex and in some cases there is a lot of it. To save trips to the database (which are extremely expensive in this scenario) I'm attempting to get all of the basic arrays (one- and two-dimensional raw database data) and put them, conditionally, into a single, five-level-deep array. Organizing the data in PHP seems like a better idea than using WHERE clauses in the SQL. Array structure: Array of years( year => array of types( types => array of information( total => value, table => array of data( index => db array ) ) ) ) My first question is: is this a bad idea? Are arrays like this appropriate for this situation? If this would work, how should I go about populating it? My initial thought was shallow to deep, but the more I work on this, the more I realize that it'd be very difficult to abstract out the conditionals that determine where each item goes in the array. So it seems that starting from the most deeply nested data may be the approach I should take. If this is array abuse, what alternatives exist?
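    A minimal sketch of the deep-to-shallow idea, assuming hypothetical column names ('year', 'type', 'value') and a $dbRows result set that are not from the original post:

        $report = array();
        foreach ($dbRows as $row) {                    // $dbRows: two-dimensional indexed array from one query
            $year = $row['year'];                      // assumed column names
            $type = $row['type'];
            if (!isset($report[$year][$type])) {
                $report[$year][$type] = array('total' => 0, 'table' => array());
            }
            $report[$year][$type]['total'] += $row['value'];   // aggregate as you go
            $report[$year][$type]['table'][] = $row;           // index => db array
        }

    Building from the innermost rows upward keeps the placement conditionals in one loop instead of scattering them across the levels.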

    Read the article

  • cannot access ubuntu 12.04 SAMBA share from windows 7 using hostname

    - by user98398
    I've been trying for days to get this working, and everywhere I look online it seems no one has a definitive answer, so here is the rundown: I have an external drive attached to my Ubuntu 12.04 machine, "nicholas-desktop". I have the entire drive shared over the network via Samba. If I try to access the drive from Windows 7 by using "\\nicholas-desktop" it fails, saying it cannot locate "nicholas-desktop". However, if I use the current IP address assigned to my machine by my router's DHCP server, by typing "\\192.168.2.XXX", I have no problems accessing the share. If I try to ping my Ubuntu machine's hostname from Windows, it fails. The same happens if I try to ping my Windows machine, "nicholas-laptop", from my Ubuntu machine. Again, if I use either machine's assigned IP address it works fine. Can someone please help me get this working? I don't want any workarounds like setting a static IP or a DHCP reservation; I want to be able to resolve hostnames from both sides. I have tried enabling Samba's WINS server so I could resolve the hostnames using NetBIOS, but that didn't work either; I may have made a mistake setting it up, though. Thanks for your time, NCB
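    For reference, a sketch of the smb.conf settings usually involved when Samba acts as the WINS server — treat these as assumptions to check against your own setup, not a confirmed fix:

        [global]
            netbios name = NICHOLAS-DESKTOP
            wins support = yes
            name resolve order = wins lmhosts host bcast

    The Windows side would also need to point at the same WINS server (DHCP option 44 or the adapter's WINS tab) and have NetBIOS over TCP/IP enabled; plain broadcast resolution only works when both machines sit on the same subnet and UDP port 137 is not filtered.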

    Read the article

  • Why are my 32bit OpenGL libraries pointing to mesa instead of nvidia, and how do I fix it?

    - by Codemonkey
    I have installed Nvidia's drivers on my Ubuntu 13 system, but according to this command (ldconfig -p | grep GL):

        $ ldconfig -p | grep GL
        libQtOpenGL.so.4 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libQtOpenGL.so.4
        libGLU.so.1 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libGLU.so.1
        libGLEWmx.so.1.8 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libGLEWmx.so.1.8
        libGLEW.so.1.8 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libGLEW.so.1.8
        libGLESv2.so.2 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/mesa-egl/libGLESv2.so.2
        libGL.so.1 (libc6,x86-64) => /usr/lib/libGL.so.1
        libGL.so.1 (libc6) => /usr/lib/i386-linux-gnu/mesa/libGL.so.1
        libGL.so (libc6,x86-64) => /usr/lib/libGL.so
        libEGL.so.1 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/mesa-egl/libEGL.so.1

    the 32-bit version of OpenGL is pointing to Mesa's libraries instead of Nvidia's. This causes my Steam games to refuse to launch with the error: "Could not find required OpenGL entry point 'glGetError'! Either your video card is unsupported, or your OpenGL driver needs to be updated." Why is this the case? When the Nvidia installer asked me if I wanted to install "32-bit compatibility libraries" (or something like that) I chose yes. How do I fix this? Edit: I just reinstalled the same Nvidia driver, and that apparently removed the 32-bit OpenGL driver completely:

        $ ldconfig -p | grep libGL.so
        libGL.so.1 (libc6,x86-64) => /usr/lib/libGL.so.1
        libGL.so (libc6,x86-64) => /usr/lib/libGL.so

    Now Steam won't start: "You are missing the following 32-bit libraries, and Steam may not run: libGL.so.1". Again, I chose YES when the installer asked me if I wanted to install 32-bit libraries. Why are they not installed!?

    Read the article

  • Windows Update cannot currently check for updates, because the service is not running

    - by Lee
    This morning I attempted to run Windows Update on two of my Windows 7 PCs (both are virtual machines), and I ran into this interesting pop-up error message. I have never encountered this problem before, so I was somewhat perplexed. From the message, my first thought was to see if the Windows Update service was running. It was. As usual, the solution is never so simple. I attempted to restart the service and reboot the PCs to no avail. So, I am off to the interwebs for a solution. I did find a solution to the problem, so I thought to post it for my future reference and for anyone else who may encounter this problem. I will be posting the answer shortly. If you have alternate solutions that have worked for you, please feel free to leave a post or comment.

    Read the article

  • Google authorship verification issue

    - by Fraser
    I'm trying to get my blog content author-verified so my face gets into the Google search results. I managed to achieve this a few weeks back - when testing my content in the Google authorship testing tool it reported that I had been verified, and I could see my mug in the results. All I had to do was wait a couple of weeks before I started popping up in the search results (I think(?)). However, I seem to have thrown a spanner in the works. I set up Google Apps for my domain and merged my old Google+ profile into my Google Apps account. This seemed to reset my Google+ profile (no biggy, since it was a new profile and only had 1 connection). I re-set up my G+ account and tied it all into my blog and its content. I am now seeing some very strange behaviour. If you take a look at one of my blog posts through the snippet testing tool: http://www.google.com/webmasters/tools/richsnippets?url=http%3A%2F%2Fblog.fraser-hart.co.uk%2Fjquery-fullscreen-background-slideshow%2F&html= you will see that it is not recognising me as an author. However, when you enter my profile URL (https://plus.google.com/108765138229426105004) into the "Authorship verification by email" input, you will see that it does in fact recognise it as verified. Now, if you try and verify the same page again, it reverts back to unverified. I thought I might have to just wait it out, but this has been over a week now, and previously (before I merged my profile) it happened instantaneously. Has anyone experienced this bizarre behaviour before? What is happening here? More importantly, is there anything I can do to resolve it? (Apologies for the long and boring question.) Cheers!

    Read the article

  • Splitting big request in multiple small ajax requests

    - by Ionut
    I am unsure about the scalability of the following model. I have no experience at all with large systems or big numbers of requests, but I'm trying to build some features with scalability in mind. In my scenario there is a user page which contains data for:
    User's details (name, location, workplace...)
    User's activity (blog posts, comments...)
    User statistics (rating, number of friends...)
    In order to show all this on the same page, a single request needs at least 3 different database queries on the back-end. In some cases, I imagine that those queries will run for quite a while, so the user experience may suffer while waiting between requests. This is why I decided to run only step 1 (the user's details) as a normal request. After the response is received, two ajax requests are sent for steps 2 and 3 (see the sketch below). When those responses are received, I simply place the data in their destined wrappers. To me at least this makes more sense. However, there are 3 requests instead of one for every user page view. Will this affect the system in the long term? I'm assuming that this kind of approach requires more resources, but is this trade-off of UX for resources a good deal, or should I stick to one plain big request?
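    A minimal sketch of that flow with jQuery (the endpoint URLs, wrapper ids and render helpers are placeholders, not from the original post):

        // the initial page load renders the user's details; the heavier sections arrive afterwards
        $(function () {
            $.getJSON('/user/123/activity', function (data) {
                $('#activity-wrapper').html(renderActivity(data));   // renderActivity() assumed
            });
            $.getJSON('/user/123/stats', function (data) {
                $('#stats-wrapper').html(renderStats(data));         // renderStats() assumed
            });
        });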

    Read the article

  • Including sender email address when forwarding emails with Outlook 2007

    - by Roee Adler
    When forwarding an email in Outlook (I have 2007), the header of the previous email shows. Sometimes it may show as follows:

        From: Joe Shmoe
        Sent: Saturday, June 12, 2010 10:01 PM
        To: Roee Adler
        Subject: Following our previous conversation

    Other times it will include the actual email address of the sender of the previous mail:

        From: Sponge Bob [mailto:[email protected]]
        Sent: Saturday, June 12, 2010 2:26 PM
        To: Roee Adler
        Subject: Sponges and other stuff

    How do I force every forwarded email to include the mail address? When forwarding from my iPhone it constantly keeps the address just the way I want it, but from Outlook it seems to depend on whether the sender is a contact of mine or not. The reason I need this is for 37signals' Highrise CRM system.

    Read the article

  • Would I be able to use code hosting services to host malware code?

    - by NlightNFotis
    Let me start by saying that I am a computer security researcher. Part of my job is to create malware to deploy in a controlled environment in order to study or evaluate several aspects of computer security. Now, I am starting to think that using an online code hosting service (such as BitBucket, GitHub, etc.) to have all my code in one place would allow me to work on my projects more efficiently. My question is: are there any issues with this? I have studied those companies' privacy policies, and they state that they allow usage of their services for lawful purposes. Since I am not distributing malware, and I am only using it on my machines and machines that I am authorized to use, aren't I allowed to use the service? For the usage that I am doing, malware is the same as any other software. I recognise that I should be extremely careful with code hosting, as any mistake on my part could hold me liable for damages and leave me open to legal action. As such, I recognise that I should use private repositories, so the code is not available to the public. But how private is a private repository? How can I trust that companies like these will not leak or sell potential (electronic) viral weaponry that I may create in the future?

    Read the article

  • How many hours of use before I need to clean a tape drive?

    - by codeape
    I do backups to an HP Ultrium 2 tape drive (HP StorageWorks Ultrium 448). The drive has a 'Clean' LED that supposedly will light up or blink when the drive needs to be cleaned. The drive has been in use since October 2005, and still the 'Clean' light has never been lit. The drive statistics are: total hours in use: 1603; total bytes written: 19.7 TB; total bytes read: 19.3 TB. My question is: how many hours of use can I expect before I need to clean the drive? Edit: I have not encountered any errors using the drive. I do restore tests every two months, and every backup is verified. Edit 2: The user manual says: "HP StorageWorks Ultrium tape drives do not require regular cleaning. An Ultrium universal cleaning cartridge should only be used when the orange Clean LED is flashing." Update: It is now May 2010 (4.5 years of use), and the LED is still off; I have not cleaned, backups verify, and regular restore tests are done.

    Read the article

  • We're Subversion Geeks and we want to know the benefits of Mercurial

    - by Matt
    Having read "I'm a Subversion geek, why should I consider or not consider Mercurial or Git or any other DVCS?", I have a related follow-up question. I read that question, read the recommended links and videos, and I see the benefits, but I don't see the overall mindshift people are talking about. Our team is 8-10 developers who work on one large code base consisting of 60 projects. We use Subversion and have a main trunk. When a developer starts a new FogBugz case they create an svn branch, do the work on the branch, and when they're done they merge back to the trunk. Occasionally they may stay on the branch for an extended time and merge the trunk to the branch to pick up the changes. When I watched Linus talk about people creating a branch and never doing it again, that's not us at all. We create probably 50-100 branches a week without issue. The biggest challenge is the merging, but we've gotten pretty good at that as well. I tend to merge by FogBugz case & check-in rather than the entire root of the branch. We never work remotely and we never make branches off of branches. If you're the only one working in that section of the code base then the merge to the trunk goes smoothly. If someone else has modified the same section of code then the merge can get messy and you might need to do some surgery. Conflicts are conflicts; I don't see how any system could get it right most of the time unless it was smart enough to understand the code. After creating a branch, the following checkout of 60k+ files takes some time, but that would be an issue with any source control system we'd use. Is there some benefit of any DVCS that we're not seeing that would be of great help to us?
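    For comparison, a rough Mercurial equivalent of the per-case branch workflow, sketched with placeholder names (the repo URL and case number are made up):

        hg clone https://server/bigrepo work    # full history comes down once, not per branch
        cd work
        hg branch case-1234                     # named branch for the FogBugz case
        # ...edit files, then:
        hg commit -m "Case 1234: fix"
        hg update default                       # back to the trunk equivalent
        hg merge case-1234                      # merge happens locally, no server round trip
        hg commit -m "Merge case 1234"
        hg push

    The main practical difference is that branching and merging are local operations, so the 60k+-file checkout per branch disappears; whether that outweighs the cost of switching tools is exactly the judgment call the question is asking about.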

    Read the article

  • Desktop Fun: Triple Monitor Wallpaper Collection Series 2

    - by Asian Angel
    Recently we shared the first batch in a series of wallpaper collections focused exclusively on triple monitor setups with you. Today we have our second offering in the series filled with all new wallpaper goodness to help make your monitors a joy to look at once again. Note: Click on the picture to see the full-size image—these wallpapers vary in size so you may need to crop, stretch, or place them on a colored background in order to best match them to your screen’s resolution. Special Notes Regarding This Collection: The website lists the following resolutions as available for backgrounds: 3072*768, 3456*864, 3840*800, 3840*960, 3840*1024, 4080*768, 4098*768, 4320*900, 4800*900, 4800*1200, 5040*1050, 5760*1080, 5760*1200, and 7680*1600. Keep in mind that the largest image size we were able to download was 5120*1600 pixels even though "5760*1080, 5760*1200, and 7680*1600" were listed. Use the "Click here to change resolution preferences" link at the top of each page to select the size best suited to your monitors before downloading. The easiest way to save these images is to right click on the previews and select "Save As".

    Read the article

  • What is it that automatically checks config changes (such as those in /etc) into git?

    - by Brandon
    I remember reading on the ubuntu forums some time ago about a program to automatically check configuration changes into version control for you. It was (of course) not Ubuntu-specific. I'm pretty sure it used git, though it may have been svn, or perhaps even able to work with multiple different VCSs. My Googling has turned up nothing, and I'd rather not roll my own script if someone has already done this well. Of course I could just manually check things in, but there are reasons I'd like it done automatically. (I'm actually planning to use this for my LastSession.plist file for Safari, so when the #@$%^*&! thing crashes, and I don't restore everything, and then Leopard crashes, the fact that it has such lousy session management won't mean I lose the dozens of windows with dozens of tabs I had open.)

    Read the article

  • "Recipient address rejected" when sending an email to an external address with sendgrid

    - by WJB
    In Postfix, I'm using relayhost to send email to an external address through SendGrid, but I get an error about the local recipient table when sending an email from my PHP code. This is my main.cf in /postfix/:

        ## -- Sendgrid
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = static:username:password
        smtp_sasl_security_options = noanonymous
        smtp_tls_security_level = may
        header_size_limit = 4096000
        relayhost = [smtp.sendgrid.net]:587

    This is the error message from the log:

        postfix/smtpd[53598]: [ID 197553 mail.info] NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 550 5.1.1 Recipient address rejected: User unknown in local recipient table; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<localhost.localdomain>

    One interesting thing is that when I use "sendmail [email protected]" from the command line, the email is delivered successfully using SendGrid. I think that's because this path uses postfix/smtp instead of postfix/smtpd. The log for this says:

        postfix/smtp[18670]: [ID 197553 mail.info] AAF7313A7E: to=, relay=smtp.sendgrid.net[50.97.69.148]:587, delay=4.1, delays=3.5/0.02/0.44/0.18, dsn=2.0.0, status=sent (250 Delivery in progress)

    Thank you

    Read the article

  • Preventing Users From Copying Text From and Pasting It Into TextBoxes

    Many websites that support user accounts require users to enter an email address as part of the registration process. This email address is then used as the primary communication channel with the user. For instance, if the user forgets her password a new one can be generated and emailed to the address on file. But what if, when registering, a user enters an incorrect email address? Perhaps the user meant to enter [email protected], but accidentally transposed the first two letters, entering [email protected]. How can such typos be prevented? The only foolproof way to ensure that the user's entered email address is valid is to send them a validation email upon registering that includes a link that, when visited, activates their account. (This technique is discussed in detail in Examining ASP.NET's Membership, Roles, and Profile - Part 11.) The downside to using a validation email is that it adds one more step to the registration process, which will cause some people to bail out on the registration process. A simpler approach to lessening email entry errors is to have the user enter their email address twice, just as most registration forms prompt users to enter their password twice. In fact, you may have seen registration pages that do just this. However, when I encounter such a registration page I usually avoid entering the email address twice; instead I enter it once and then copy and paste it from the first textbox into the second. This behavior circumvents the purpose of the two textboxes - any typo entered into the first textbox will be copied into the second. Using a bit of JavaScript it is possible to prevent most users from copying text from one textbox and pasting it into another, thereby requiring the user to type their email address into both textboxes. This article shows how to disable cut and paste between textboxes on a web page using the free jQuery library. Read on to learn more!
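    The excerpt stops before the article's code, but the technique it describes can be sketched roughly like this with jQuery (the field id is invented for the example):

        $(function () {
            // block cut/copy/paste on the confirmation box so the address must be retyped
            $('#ConfirmEmail').bind('cut copy paste', function (e) {
                e.preventDefault();
            });
        });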

    Read the article

  • When does the "Do One Thing" paradigm become harmful?

    - by Petr
    For the sake of argument, here's a sample function that prints the contents of a given file line by line.

    Version 1:

        void printFile(const string & filePath) {
            fstream file(filePath, ios::in);
            string line;
            while (file.good()) {
                getline(file, line);
                cout << line << endl;
            }
        }

    I know it is recommended that functions do one thing at one level of abstraction. To me, though, the code above does pretty much one thing and is fairly atomic. Some books (such as Robert C. Martin's Clean Code) seem to suggest breaking the above code into separate functions.

    Version 2:

        void printLine(const string & line) {
            cout << line << endl;
        }

        void printLines(fstream & file) {
            string line;
            while (file.good()) {
                getline(file, line);
                printLine(line);
            }
        }

        void printFile(const string & filePath) {
            fstream file(filePath, ios::in);
            printLines(file);
        }

    I understand what they want to achieve (open file / read lines / print line), but isn't it a bit of overkill? The original version is simple and in some sense already does one thing - prints a file. The second version will lead to a large number of really small functions which may be far less legible than the first version. Wouldn't it be, in this case, better to have the code in one place? At which point does the "Do One Thing" paradigm become harmful?

    Read the article

  • Run a script with Apache2 in a certain directory

    - by TheGatorade
    I am trying to run WebMCP on an Apache2 server. It's got 2 executable files, which I have at /opt/webmcp/cgi-bin/webmcp.lua and /opt/webmcp/cgi-bin/webmcp-wrapper.lua. If I run the wrapper from a directory other than /opt/webmcp/cgi-bin, it says it cannot find webmcp.lua and gives a 500 error. If I run it from the correct directory it works. My server has webmcp.lua set as the DirectoryIndex and it's giving a 500 error. Could that be because of this problem? /opt/webmcp/cgi-bin/ is already set as the DocumentRoot, and is accessible by www-data.
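    If the failure really is just the working directory, one hedged workaround is to point Apache at a tiny wrapper that changes into the cgi-bin directory first (the wrapper name below is made up):

        #!/bin/sh
        # /opt/webmcp/cgi-bin/run-webmcp.sh -- hypothetical wrapper
        cd /opt/webmcp/cgi-bin || exit 1
        exec ./webmcp-wrapper.lua "$@"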

    Read the article

  • Variant Management– Which Approach fits for my Product?

    - by C. Chadwick
    Jürgen Kunz – Director Product Development – ORACLE Deutschland B.V. & Co. KG

    Introduction
    In a difficult economic environment, it is important for companies to understand customer requirements in detail and to address them in their products. Customer-specific products, however, usually cause increased costs. Variant management helps to find the best combination of standard components and custom components, one which balances the customer's product requirements and product costs. Depending on the type of product, different approaches to variant management will be applied. For example, the automotive product "car" or electronic/high-tech products like a "computer", with a pre-defined set of options to be combined in the individual configuration (so-called "Assembled-to-Order" products), require a different approach than products in heavy machinery, which are (at least partially) engineered in a customer-specific way (so-called "Engineered-to-Order" products). This article discusses different approaches to variant management. Starting with the simple Bill of Material (BOM), this article presents three different approaches to variant management, which are provided by Agile PLM.

    Single Level BOM and Variant BOM
    The single level BOM is the basic form of the BOM. The product structure is defined using assemblies and single parts. A particular product is thus represented by a fixed product structure. As soon as you have to manage product variants, the single level BOM is no longer sufficient. A variant BOM will be needed to manage product variants. The variant BOM is sometimes referred to as a 150% BOM, since a variant BOM contains more parts and assemblies than actually needed to assemble the (final) product – just 150% of the parts. You can evolve the variant BOM from the single level BOM by replacing single nodes with a placeholder node. The placeholder in this case represents the possible variants of a part or assembly. Product structure nodes which are part of any product are so-called "Must-Have" parts; "Optional" parts can be omitted in the final product. Additional attributes allow limiting the quantity of parts/assemblies which can be assigned at a certain position in the Variant BOM. Figure 1 shows the variant BOM of Agile PLM.
    Figure 1: Variant BOM in Agile PLM
    During the instantiation of the Variant BOM, the placeholders get replaced by specific variants of the parts and assemblies. The selection of the desired or appropriate variants is either done step by step by the user or by applying pre-defined configuration rules. As a result of the instantiation, an independent BOM will be created (Figure 2).
    Figure 2: Instantiated BOM in Agile PLM
    This kind of Variant BOM can be used for "Assembled-to-Order"-type products as well as for "Engineered-to-Order"-type products. In the case of "Assembled-to-Order"-type products, the instantiation is typically done automatically with pre-defined configuration rules. For "Engineered-to-Order"-type products, at least part of the product is selected manually to make use of customized parts/assemblies that have been engineered according to the specific custom requirements.

    Template BOM
    The Template BOM is used for "Engineered-to-Order"-type products. It is another type of variant BOM. The engineer works in a flexible environment which allows him to build the most creative solutions.
    At the same time the engineer shall be guided to re-use existing solutions, and it shall be assured that product variants of the same product family share the same base structure. The template BOM defines the basic structure of products belonging to the same product family. Let's take a gearbox as an example. The customer-specific configuration of the gearbox is influenced by several parameters (e.g. rpm range, transmitted torque), which are defined in the customer's requirement document. Figure 3 shows part of a Template BOM (yellow) and its relation to the product family hierarchy (blue).
    Figure 3: Template BOM
    Every component of the Template BOM has links to the variants that have been engineered so far for the component (depending on the level in the Template BOM, they are product variants, assembly variants or single part variants). This library of solutions, the so-called solution space, can be used by the engineers to build new product variants. In the best case, the engineer selects an existing solution variant, such as the gearbox shown in figure 3. When the existing variants do not fulfill the specific requirements, a new variant will be engineered. This new variant must be compliant with the given Template BOM. If we look at the gearbox in figure 3, it must consist of a transmission housing, a connecting plate, a set of gears and a planetary transmission – presuming that all components are must-have components. The new variant will enhance the solution space and is automatically available for re-use in future variants. The result of the instantiation of the Template BOM is a stand-alone BOM which represents the customer-specific product variant.

    Modular BOM
    The concept of the modular BOM was invented in the automotive industry. Passenger cars are so-called "Assembled-to-Order" products. The customer first selects the specific equipment of the car (so-called specifications) – for instance engine, audio equipment, rims, color. Based on this information the required parts will be determined and the customer-specific car will be assembled. Certain combinations of specifications are not available to the customer, because they are not feasible from a technical perspective (e.g. a convertible with a sun roof) or because the combination will not be offered for marketing reasons (e.g. steel rims with a sports line car). The modular BOM (yellow structure in figure 4) is defined in the context of a specific product family (in the sample it is the product family "Speedstar"). It is the same modular BOM for the different types of cars of the product family (e.g. sedan, station wagon). The assemblies or single parts of the car (blue nodes in figure 4) are assigned at the leaf level of the modular BOM. The assignment of assemblies and parts to the modular BOM is enriched with a configuration rule (purple elements in figure 4). The configuration rule defines the conditions to use a specific assembly or single part. The configuration rule is valid in the context of a type of car (green elements in figure 4). Color-specific parts are assigned to the color-independent parts via additional configuration rules (grey elements in figure 4). The configuration rules use Boolean operators to connect the specifications. Additional consistency rules (constraints) may be used to define invalid combinations of specifications (so-called exclusions). Furthermore, consistency rules may be used to add specifications to the set of specifications.
    For instance, it is important that a car with a diesel engine is always built using the high-capacity battery.
    Figure 4: Modular BOM
    The calculation of the car configuration consists of several steps. First the consistency rules (constraints) are applied; as a result, specifications might be added automatically. The second step determines the assemblies and single parts for the complete structure of the modular BOM, by evaluating the configuration rules in the context of the current type of car. The evaluation of the rules for one component in the modular BOM might result in several rules being fulfilled. In this case the most specific rule (typically the longest rule) will win (a rough sketch of this selection is given after the article text). Thanks to this approach, it is possible to add a specific variant to the modular BOM without the need to change any other configuration rules. As a result the whole set of configuration rules is easy to maintain. Finally the color-specific assemblies and parts are determined and the configuration is completed.
    Figure 5: Calculated Car Configuration
    The result of the car configuration is shown in figure 5. It shows the list of assemblies and single parts (blue components in figure 5) which are required to build the customer-specific car.

    Summary
    There are different approaches to variant management; three of them have been presented in this article. At the end of the day, it is the type of the product which decides the best approach. For "Assembled-to-Order"-type products it is very likely that you can define the configuration rules and calculate the product variant automatically. Products of the "Engineered-to-Order" type, however, need to be engineered. Nevertheless, in the majority of cases part of the product structure can be generated automatically in a similar way to "Assembled-to-Order"-type products. That said, it is important to first analyze the product portfolio in order to define the best approach to variant management.
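    The "most specific rule wins" selection described in the Modular BOM section can be sketched in plain JavaScript; the rule format and part names are invented for illustration and are not Agile PLM's actual API:

        // each candidate part carries a rule: the set of specifications it requires
        var candidates = [
            { part: 'BATTERY-STD',     rule: [] },                          // default, matches any car
            { part: 'BATTERY-HIGH',    rule: ['DIESEL'] },                  // more specific
            { part: 'BATTERY-HIGH-XL', rule: ['DIESEL', 'TRAILER-HITCH'] }  // most specific
        ];

        function selectPart(specs, candidates) {
            var best = null;
            candidates.forEach(function (c) {
                var matches = c.rule.every(function (s) { return specs.indexOf(s) !== -1; });
                // among fulfilled rules, the longest (most specific) one wins
                if (matches && (best === null || c.rule.length > best.rule.length)) { best = c; }
            });
            return best ? best.part : null;
        }

        selectPart(['DIESEL'], candidates);   // -> 'BATTERY-HIGH'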

    Read the article

  • Simple one-way synchronisation of user password list between servers

    - by Renaud Bompuis
    Using a RedHat-derivative distro (CentOS), I'd like to keep the list of regular users (UID over 500), along with the corresponding group and shadow entries, pushed to a backup server. The sync is only one-way, from the main server to the backup server. I don't really want to have to deal with LDAP or NIS. All I need is a simple script that can be run nightly to keep the backup server updated. The main server can SSH into the backup system. Any suggestions? Edit: Thanks for the suggestions so far, but I think I didn't make myself clear enough. I'm only looking at synchronising normal users whose UID is at or above 500. System/service users (with UID below 500) may be different on both systems, so you can't just sync the whole files, I'm afraid.
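    A minimal sketch of such a nightly script (the backup hostname and staging paths are placeholders, and it assumes the UIDs/GIDs themselves need no remapping):

        #!/bin/sh
        # push passwd/group/shadow entries for regular users (UID/GID >= 500) to the backup host
        BACKUP=backup.example.com
        awk -F: '$3 >= 500' /etc/passwd > /tmp/passwd.sync
        awk -F: '$3 >= 500' /etc/group  > /tmp/group.sync
        # shadow has no UID column, so pick the users selected above
        for u in $(cut -d: -f1 /tmp/passwd.sync); do grep "^$u:" /etc/shadow; done > /tmp/shadow.sync
        scp /tmp/passwd.sync /tmp/group.sync /tmp/shadow.sync root@$BACKUP:/root/user-sync/

    On the backup host a companion cron job (not shown) would then drop its own >=500 entries and append the synced ones; the delicate part is making that merge atomic so logins never see a half-written file.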

    Read the article

  • Radeon HD 5770 - DisplayPort Problem

    - by Nick Schmit
    I am trying to set up 3 monitors with the Radeon HD 5770 I received a few days ago. I have read that I can use 2 DVI ports and an active DisplayPort adapter for a third DVI monitor. I have purchased this adapter: http://www.accellcables.com/products/DisplayPort/DP/dp_dvid.htm I have the two normal DVI monitors working, but when I try to extend the display to the third, I get a message saying the system has detected a problem with the connection through the DisplayPort which may limit resolution/refresh rate. I know that all the cables are fine; I have replaced the adapter, the monitors all work, I have tried different monitors through the adapter, and even nothing but the adapter, but I cannot get any monitor to work using it. I am trying to use 2 Dell monitors and an Acer. Have I overlooked something? Is there a compatibility issue I missed? Any suggestions as to what I could try? Thanks in advance.

    Read the article

  • How to configure nginx to serve static contents from RAM?

    - by Vijayendra Tripathi
    I want to set up nginx as my web server. I want to have image files cached in memory (RAM) rather than on disk. I am serving a small page and want a few images always served from RAM. I don't wish to use Varnish (or any other such tool) for this, as I believe nginx has the capability to cache contents in RAM. I am not sure how I should configure nginx for this. I did try a few combinations but they didn't work; nginx uses the disk all the time to get the images. For example, when I tried Apache Benchmark to test with the following command:

        ab -c 500 -n 1000 http://localhost/banner.jpg

    I got the following error:

        socket: Too many open files (24)

    I guess this means nginx is trying to open too many files simultaneously from the disk and the OS is not allowing this operation. Can anyone please suggest a correct configuration? Thanks for considering this message.
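    As far as I know, nginx's open_file_cache keeps file descriptors and metadata rather than file contents, so a common way to guarantee RAM-backed serving is a tmpfs mount; a sketch, with paths invented for the example:

        # mount a RAM-backed filesystem and copy the images into it (e.g. at boot)
        mount -t tmpfs -o size=64m tmpfs /var/www/ram
        cp /var/www/static/banner.jpg /var/www/ram/

        # nginx.conf (fragment)
        worker_rlimit_nofile 8192;                    # also raise the OS nofile limit if needed
        http {
            open_file_cache max=1000 inactive=60s;    # caches descriptors/metadata, not contents
            server {
                listen 80;
                location /images/ {
                    alias /var/www/ram/;              # /images/banner.jpg -> /var/www/ram/banner.jpg, served from RAM
                }
            }
        }

    Note that the "Too many open files (24)" message may equally be the ab client hitting its own ulimit with -c 500, so it is worth raising ulimit -n in the shell running ab as well.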

    Read the article

  • Animating DOM elements vs refreshing a single Canvas

    - by mgibsonbr
    A few years ago, when the HTML canvas element was still kind of fresh, I wrote a small game in a rather "unusual" way: each game element had its own canvas, and frequently animated elements even had multiple canvases, one for each animation sprite. This way, translation was done by manipulating the DOM position of the canvases, while sprite animation consisted of altering the visibility of the already drawn canvases (z-indexes, of course, were the tricky part). It worked like a charm: even in IE6 with excanvas it showed decent performance, and everything was rather consistent between browsers, including some smartphones. Now I'm thinking of writing a larger game engine in the same fashion, so I'm wondering whether it would be a good idea to do so in the current context (with all the advances in browsers and so on). I know I'm trading memory for time, so this needs to be customizable (even at runtime) for each machine the game will be running on. But I believe using separate canvases would also help to avoid the game "freezing" on CPU spikes, since the translation would still happen even if the redraws lag for a while. Besides, the browsers' rendering engines are already optimized in many ways, so I'm guessing this scheme would also reduce the load on the CPU (in contrast to doing everything in JavaScript - especially the less optimized engines). It looks good in my head, but I'd like to hear the opinion of more experienced people before proceeding further. Is there any known drawback to doing this? I'm particularly inexperienced in dealing with the GPU, so I wonder whether this "trick" would nullify any benefit of using a single, big canvas. Or maybe on modern devices it's overkill (though I'm skeptical about the claims that canvas+js - especially WebGL - will ever be a good alternative to native code). Any thoughts?
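    For context, the two approaches reduce to something like the following sketch (the sprite object shape is invented; real game-loop plumbing is omitted):

        // per-element canvases: translation is a style change, no pixel repaint
        sprite.canvas.style.left = sprite.x + 'px';
        sprite.canvas.style.top  = sprite.y + 'px';
        sprite.frames[current].style.visibility = 'hidden';    // frame swap = toggling pre-drawn canvases
        sprite.frames[next].style.visibility = 'visible';

        // single big canvas: every frame clears and redraws everything
        ctx.clearRect(0, 0, canvas.width, canvas.height);
        sprites.forEach(function (s) { ctx.drawImage(s.image, s.x, s.y); });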

    Read the article

  • Looking for 2D Cross platform suggestions based on requirements specified

    - by MannyG
    I am an intermediate developer with minor experience on enterprise mobile applications for iPhone, Android and BlackBerry, looking to build my first ever mobile game. I did a Google search for some game dev forums and this popped up, so I thought I would try posting here as I've had no luck elsewhere. If you have ever heard of the game for the iPhone and Android platforms entitled Avatar Fight, then you will have an idea of the graphic capabilities I require: basically automated battles where one sprite attacks another doing cool animations, but all in 2D. My buddy and I have two motivations. One is to jump into mobile dev, as my experience is limited, as is his, so we would like some current knowledge (HTML5 would be nice to learn). The other is to make some money on the side; we don't expect much, but polishing the game and putting our all into it will hopefully reward us a bit. We have looked into the Corona engine, however a lot of people are saying it is limited in the graphics department. We are open to learning new languages like Lua, C++, Python etc. Others we have looked at include PhoneGap, Rhomobile, Unity, and the list goes on. I really have no idea what the pros and cons of these are, but for a basic battle sequence and some mini games we want to choose the right one. Some more things that we will be doing include card games, side-scrolling games based on flying objects, and maybe fishing stuff. We want to start small with these mini games and work our way up to the idea we would like to implement in the future. We only want to work in 2D. So with these requirements, please help me choose a platform to work on (cross-platform is what we are ideally leaning towards). Please feel free to throw in some pieces of advice you may have for newbie game developers like myself too. Thank you for reading!

    Read the article

  • Groovy Debugging

    - by Vijay Allen Raj
    Groovy Debugging - An Overview: ADF BC developers may express snippets of business logic (like the following) as embedded Groovy expressions:

    default / calculated attribute values
    validation rules / conditions
    error message tokens
    LOV input values (VO)

    This approach has the advantages that:

    Groovy has a compact, EL-like syntax for expressing simple logic
    ADF has extended this syntax to provide useful built-ins
    embedded Groovy expressions are customizable
    Groovy debugging support helps improve the maintainability of business logic expressed in Groovy

    Following is an example of how Groovy debugging works.

    Example: This example shows how a script expression validator can be created and the Groovy script debugged. It shows step over and breakpoint functionality as well as syntax coloring. Let us create an ADF BC application based on the Emp and Dept tables, and add a script expression validator based on the script:

        if (Sal >= 5000) {
            // If EmpSal is greater than a property value set on the custom
            // properties on the root AM, raise a custom exception;
            // else raise a custom warning
            if (Sal >= source.DBTransaction.rootApplicationModule.propertiesMap.salHigh) {
                adf.error.raise("ExcGreaterThanApplicationLimit");
            } else {
                adf.error.warn("WarnGreaterThan5000");
            }
        } else if (EmpSal <= 1000) {
            adf.error.raise("ExcTooLow");
        }
        return true;

    In the Emp.xml flat editor, place breakpoints at various locations as shown below. Right-click the application module and click Debug. Enter a value greater than 5000 and click Next. You can see the debugging at work as shown below. The code can also be stepped over and debugged.

    Read the article

  • Discuss: PLs are characterised by which (iso)morphisms are implemented

    - by Yttrill
    I am interested to hear discussion of the proposition summarised in the title. As we know, programming language constructions admit a vast number of isomorphisms. In some languages, at some places in the translation process, some of these isomorphisms are implemented, whilst others require code to be written to implement them. For example, in my language Felix, the isomorphism between a type T and a tuple of one element of type T is implemented, meaning the two types are indistinguishable (identical). Similarly, a tuple of N values of the same type is not merely isomorphic to an array, it is an array: the isomorphism is implemented by the compiler. Many other isomorphisms are not implemented; for example, there is an isomorphism expressed by the following client code:

        match v with | ((?x,?y),?z) => x,(y,z) // Felix
        match v with | ((x,y),z) -> x,(y,z) (* Ocaml *)

    As another example, a type constructor C of int in Felix may be used directly as a function, whilst in OCaml you must write a wrapper: let c x = C x. Another isomorphism Felix implements is the elimination of unit values, including those in tuples. Felix can do this because (most) polymorphic values are monomorphised, which can be done because it is a whole-program analyser; OCaml, for example, cannot do this easily because it supports separate compilation. For the same reason Felix performs type-class dispatch at compile time whilst Haskell passes around dictionaries. There are some quite surprising issues here. For example, an array is just a tuple, and tuples can be indexed at run time using a match and returning a value of a corresponding sum type. Indeed, to be correct, the index used is in fact a case of a unit sum with N summands, rather than an integer. Yet, in a real implementation, if the tuple is an array the index is replaced by an integer with a range check, and the result type is replaced by the common argument type of all the constructors: two isomorphisms are involved here, but they're implemented partly in the compiler translation and partly at run time.

    Read the article

  • Collapsing Bookmarks

    - by Tim Dexter
    I said I would tackle documenting some of the new features in the 10.1.3.4.1 roll-up patch I mentioned last week. With the patch you can now set the default state of bookmarks (if you create them) in your PDF outputs. Maybe your users prefer to see them all collapsed to the base level, or maybe collapsed to the second level to ease navigation; whatever they need. It's another opportunity for you to look like a star! You of course need to start with a table of contents, then add the convert|copy to bookmarks command. You can then add the new collapse command to set the appropriate level in the bookmarks.

        <?copy-to-bookmark:?>
        <?collapse-bookmark:show;2?>
        <<< Table of Contents >>>
        <?end convert-to-bookmark?>

    The command allows you to expand or collapse the bookmarks as you need. Of course you will know how many levels you will have in the final output document. The command takes the form:

        <?collapse-bookmark:show|hide;level int?>

    Some examples:

        <?collapse-bookmark:hide;1?>
        <?collapse-bookmark:hide;2?>
        <?collapse-bookmark:hide;3?>

    Sample template and data here. Don't forget you need that 10.1.3.4.1 roll-up!

    Read the article
