Search Results

  • Extract Certs from Apache

    - by user271619
    Recently I've had to uninstall a single self-signed SSL certificate from one of my Apache boxes, specifically for an outside party. That's not really a problem for me, since it was easy. What confuses me is how they knew I had a self-signed certificate at all: the domain I provided them was not related to the domain with the self-signed certificate. Does this mean Apache publicizes the virtual hosts in the httpd.conf file? I asked the outside party what software they used to extract information from my server, and they provided this GitHub link: https://gist.github.com/4ndrej/4547029 I figured I'd ask the community first, before I attempt installing the Java program.
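
    For what it's worth, anyone can pull the certificate a server presents for a given hostname without special tools; a hedged sketch using openssl (the hostname is a placeholder):

        openssl s_client -connect yourserver.example.com:443 \
                -servername yourserver.example.com </dev/null 2>/dev/null \
            | openssl x509 -noout -subject -issuer
        # a self-signed certificate shows an identical subject and issuer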

    Read the article

  • Updating wordpress in a multi-node environment

    - by Peter
    I'm finding this very tricky in a multi-node environment with code under revision control, i.e. multiple frontends and a single database. I have a deployment process that pushes a git repo to the servers, but obviously if I update WordPress from within the admin panel, it only updates the files on one frontend. I would then need to copy the new files over to the other frontend nodes. Worse, when WordPress updates itself on a node, it writes those changes into the working copy, which breaks the automatic deploys that perform 'git pull': the repo then has untracked changes and refuses to pull in new deploys without manual intervention. How does one easily keep WordPress updated in a multi-node (load-balanced) environment?
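
    One common approach (a hedged sketch, assuming wp-cli is available; paths are placeholders) is to forbid updates from the dashboard entirely and make them a step in the deploy pipeline instead:

        # one-time, in wp-config.php, stop WordPress writing to its own files:
        #     define('DISABLE_FILE_MODS', true);
        # then update on a build node, commit, and deploy as usual:
        wp core update --path=/var/www/site
        wp plugin update --all --path=/var/www/site
        git add -A && git commit -m "WordPress core/plugin update"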

    Read the article

  • Wordpress 3 multi-site install

    - by mike
    Hello, trying to figure out if this is possible... My company has a CMS product written in Java, and we decided to use WordPress to run blogs for our clients. Obviously, WordPress does not run on Tomcat (at least not by default), so we installed Pound (http://www.apsis.ch/pound/) on our server and set up Apache and Tomcat on different ports. When "/blog/" is requested, the request is directed to Apache. This works fine, but we would like to use WordPress multi-site so that we can manage all the blogs from a single interface. We would also like the URL for every site to be "/blog/", for example: http://www.site1.com/blog/ and http://www.site2.com/blog/. I'm thinking it would have to be done with Apache? Is it even possible? Thanks!
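
    On the Apache side, a single vhost can answer for every client domain and pass the Host header through to WordPress, which multi-site then uses to pick the right blog (a hedged sketch; domains, port and paths are placeholders, and the multi-site install itself still needs to be configured for multiple domains):

        <VirtualHost *:8080>
            ServerName  www.site1.com
            ServerAlias www.site2.com www.site3.com
            DocumentRoot /var/www/wordpress
            # WordPress multi-site resolves the blog from Host plus the /blog/ path
        </VirtualHost>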

    Read the article

  • Layers - Logical separation vs physical

    - by P.Brian.Mackey
    Some programmers recommend logical separation of layers over physical. For example, given a DL (data layer), this means we create a DL namespace, not a DL assembly. Benefits include: faster compilation time, simpler deployment, faster startup time for your program, and fewer assemblies to reference. I'm on a small team of 5 devs. We have over 50 assemblies to maintain. IMO this ratio is far from ideal. I prefer an extreme programming approach: if 100 assemblies are easier to maintain than 10,000, then 1 assembly must be easier than 100. Given technical limits, we should strive for < 5 assemblies. New assemblies are created out of technical need, not layer requirements. Developers are worried for a few reasons. A. People like to work in their own environment so they don't step on each other's toes. B. Microsoft tends to create new assemblies, e.g. ASP.NET has its own DLL, and so does WinForms. C. Devs view this drive for a common assembly as a threat, since some team members have a tendency to change the common layer without regard for how it will impact dependencies. My personal view: I see A as silos, aka cowboy programming, and suggest we implement branching to create isolation. On C: first, that is a human problem and we shouldn't create technical workarounds for human behavior; second, my goal is not to put everything in common. Rather, I want partitions to be made in namespaces, not assemblies. Having a shared assembly doesn't make everything common. I want the community to chime in and tell me if I've gone off my rocker. Is a drive for a single assembly, or my viewpoint generally, illogical or otherwise a bad idea?

    Read the article

  • Odd FTP ASCII/BINARY transfer behavior after migrating to a new server

    - by Incognita Mundi
    I recently got a dedicated server, and after migrating to the new machine I started noticing problems with file transfers. My FTP client, despite being set to auto, keeps uploading PHP files in binary mode, and the content of those files gets messed up. Since I mostly upload files of different kinds, it would be annoying to switch between binary and ASCII every single time; besides, I never had such problems before. What could be the cause of this behavior? My dedicated server runs CentOS 6.4 and the FTP server is Pure-FTPd. I tried different FTP clients and they all have the same problem, so I assume it is some misconfiguration on the server side. Thanks
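
    A quick way to take the client out of the equation (a hedged sketch; host, credentials and file name are placeholders) is to upload with a tool that always uses binary (image) mode and compare checksums:

        curl -T test.php ftp://ftp.example.com/ --user name:password  # curl FTP uploads are binary by default
        md5sum test.php   # compare against the checksum of the uploaded copy
        # if even the binary-mode copy differs, the server side is mangling it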

    Read the article

  • How can I convert an ordinary text file to a .csv file, and import it to Excel?

    - by Xavierjazz
    I have a group of names and addresses that I would like to import into Outlook. At the moment I have imported them into Excel, but all names and addresses are in one long entry. All are already separated by a comma. How can I get Excel to select each "value" and move it to a separate cell? Edit: I had already tried taking a text file and saving it as a .csv file. However, all contacts load into a single cell. I am using Excel 2003. Thanks.

    Read the article

  • Why are job postings always looking for "rockstars"? [closed]

    - by Xepoch
    I have noticed a recent trend of job postings requesting programmers who are "rockstars". I get it: they're looking for someone who is really good at what they do. But why (pray tell) make the reference to a rockstar? Do these companies really want the traits of a real rockstar: partying all night and waking up to take care of quick business in the morning? Substance abuse, narcissism with celebrity, compensation well exceeding their management, excellence at putting on a short-lived show, entertainment instead of value, one-hit (project) wonders or single-genre performers, et cetera? What is wrong with a Senior or Principal Software Engineer who has an established and proven passion for the business? Rather, don't we mean quite the opposite: someone who rolls up their sleeves and gets to work, takes appropriate direction and helps influence teams, applies lessons learned and proper practices in their code, provides timely communication to the whole team, can code in and understand multiple languages, and understands the science and theory behind computation? Is there a trend to diversify the software engineering ranks? How many software rockstars can you hire before your band starts breaking up? Sure, there are lots of folks doing this stuff on their own, maybe even a rare few who code for show, but I wager the majority do it for business. I don't see ads for rockstar accountants, or rockstar machinists, or rockstar CFOs. What makes software programmers and their hiring departments lean towards this kind of job title?

    Read the article

  • Packaging MATLAB (or, more generally, a large binary, proprietary piece of software)

    - by nfirvine
    I'm trying to package MATLAB for internal distribution, but this could apply to any piece of software with the same architecture. In fact, I'm packaging multiple releases of MATLAB to be installed concurrently. Key things: very large installation size (~4 GB), composed of a core and several plugins (toolboxes). Initially, I created a single "source" package (matlab2011b) that builds several .debs (mainly matlab2011b-core and matlab2011b-toolbox-* for each toolbox). The rules file is just the standard dh catch-all ("%: dh $@"); there is no real build step, only copying files. I use a number of debian/*.install files to specify files to copy from a copy of an installation to /usr/lib/. The problem is, every time I build the thing (say, to make a correction to the core package), it recopies every file listed in the *.install files to e.g. debian/$packagename/usr/ (the build phase), and then has to bundle that into a .deb file. It takes a long time, on the order of hours, and is doing a lot of extra work. So my questions are: Can you make dh_install do a hardlink copy (like cp -l) to save time? (AFAICT from the man page, no.) Maybe I should just get it to do this in the Makefile? (That's gonna be a big Makefile.) Can you make debuild only rebuild .debs that need rebuilding? Or specify which .debs to rebuild? Is my approach completely stupid? Should I break each of the toolboxes into its own source package too? (I'll have to do some silly templating or something, because there are hundreds of them. :/)
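
    On the hardlink question, debhelper lets you override individual steps in debian/rules, so one hedged possibility (package and directory names are placeholders, recipe lines must be indented with tabs, and hardlinks require the source tree and debian/ to live on the same filesystem) is to bypass dh_install for the big payload:

        %:
            dh $@

        override_dh_install:
            mkdir -p debian/matlab2011b-core/usr/lib/matlab2011b
            cp -al matlab-copy/* debian/matlab2011b-core/usr/lib/matlab2011b/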

    Read the article

  • Converting PDF portfolios to plain text (pdftotext?)

    - by Andrea
    I am trying to convert a large number of PDFs (~15000) to plain text using pdftotext. This is working pretty well except for a few of the PDFs (~600) which, I guess, are "PDF portfolios." When I run these PDFs through pdftotext, it just outputs: For the best experience, open this PDF portfolio in Acrobat 9 or Adobe Reader 9, or later. Get Adobe Reader Now! If I do open these PDFs in Adobe Reader, they look like two or more PDFs inside a single file. Has anyone encountered this issue before? Is there any tool I can use to convert these PDFs automatically? (Either directly to text or at least to regular PDFs that pdftotext can then understand.)
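
    Since a portfolio is essentially a PDF carrying other PDFs as embedded attachments, poppler's pdfdetach may be able to unpack them so pdftotext can take over (a hedged sketch; file names are placeholders):

        mkdir parts && cd parts
        pdfdetach -list ../portfolio.pdf      # show the embedded files
        pdfdetach -saveall ../portfolio.pdf   # extract them into the current directory
        for f in *.pdf; do pdftotext "$f"; done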

    Read the article

  • Is using a dedicated thread just for sending gpu commands a good idea?

    - by tigrou
    The most basic game loop is like this:

        while(1) {
            update();
            draw();
            swapbuffers();
        }

    This is very simple but has a problem: some drawing commands can block, and the CPU will sit waiting when it could be doing other things (like processing the next update() call). Another possible solution I have in mind would be to use two threads: one for updating and preparing commands to be sent to the GPU, and one for sending those commands to the GPU:

        // first thread
        while(1) {
            update();
            render(); // use the game state to generate all needed triangles and
                      // commands for the GPU and put them in a buffer; no command
                      // is sent to the GPU yet (two buffers are used, see below)
            pulse();  // signal the other thread that data is ready
        }

        // second thread
        while(1) {
            wait();             // wait for the first thread's data to arrive
            send_data_togpu();  // send prepared commands from the buffer to the graphics card
            swapbuffers();
        }

    Also, two buffers would be used, so one buffer can be filled with GPU commands while the other is being processed by the GPU. Do you think such a solution would be effective? What would be the advantages and disadvantages, especially against a simpler solution (e.g. single-threaded with triple buffering enabled)?

    Read the article

  • Change Keybindings (hardware to software)

    - by Daniel
    I ran a search for this, but the answers I saw were referring to something altogether different from what I'm asking, so let me clarify: I'm not asking how to change key-combo shortcuts. I'm asking how you actually change what your computer thinks you did when you press a given key. An example of what I mean (and the reason I'm asking): I'm a Chrome user, and I use Windows alongside Ubuntu. I own a Lenovo Thinkpad T61p (it came with my scholarship package, and I would have shopped for a nicer computer if I could have). The T61p has two buttons above the left and right arrow keys that map to the browser's back and forward commands. This is extremely frustrating for me, as I use the arrow keys a lot, and a single accidental keystroke will catch me going back a page, losing temporary data, and yelling at my stupid keyboard. At the same time, I'm the type of person who keeps way too many tabs open. Chrome doesn't let me reconfigure keyboard shortcuts, and the only ways it allows you to switch between tabs are ctrl+tab, ctrl+shift+tab, and ctrl+page up/down. Notepad++ had finally found the solution to both problems: the page-back and page-forward keys functioned as tab-back and tab-forward keys. I went through quite some effort to learn how to change the keybindings in Windows. The page-back and page-forward keys are now the page up and page down keys, respectively, and if I hold control, they let me switch tabs easily and rather pleasantly. And if I hit the keys by accident, no harm, no foul. Alas, I'm in Ubuntu now, and I need to go through the process again. While I couldn't just find the answer online like I did for Windows, I know Ubuntu has nice, supportive communities like this one, where, hopefully, somebody can tell me how to do either what I did in Windows, or directly make my computer change tabs when I hit those buttons (removing the ctrl key from the tab-changing command).
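
    Under X, this can be done in user space with xmodmap (a hedged sketch: keycodes 166 and 167 are what the back/forward keys commonly report on ThinkPads, but yours may differ, so press them in xev first to check):

        xev                                # find the actual keycodes for the two keys
        xmodmap -e 'keycode 166 = Prior'   # back key    -> Page Up
        xmodmap -e 'keycode 167 = Next'    # forward key -> Page Down

    Putting the two keycode lines in a file loaded at session start (e.g. xmodmap ~/.Xmodmap) makes the mapping persistent.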

    Read the article

  • How is it possible for SSDs to have such good latency?

    - by tigrou
    The first time I read some information about SSDs, I was surprised to learn they internally use NAND flash chips. This kind of memory generally has low bandwidth and high latency, while SSDs are just the opposite. But here is how it works: SSDs increase their bandwidth by using several NAND flash chips in parallel. In other words, they do data striping (aka RAID 0) across several chips, handled by the controller. What I don't understand is how SSDs achieve such low latency when they are built from NAND chips (or at least far better latency than a typical single NAND chip would offer). EDIT: I think I under-estimated NAND chip capabilities. USB drives, while powered by NAND, are mostly limited by the USB protocol (which has pretty high latency) and the USB controller. That explains their poor performance in some cases.

    Read the article

  • Mount linux partition as Windows network share over internet

    - by CptEO
    I have a Linux server running RHEL 6 and two Windows servers. All servers are connected directly to the web with an external IP; they are not on a local LAN. What I would like to achieve is to set up the Linux server so that it offers a single share (the whole partition) that can be mounted as a network drive within Windows. I don't want to use any 3rd-party software to access the Linux server because I want to use it as a backup target for Bare Metal Restore: I need to be able to access the Linux partition from within the Windows Recovery Environment, where I cannot install any 3rd-party software. The Linux server should only be accessible from given IP addresses (e.g. the two Windows servers). Does anyone know if the setup I would like to have is possible?
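
    Samba can fill that role, since the Windows Recovery Environment can map an SMB share with a plain "net use" command. A hedged smb.conf sketch (share name, path and addresses are placeholders, and exposing SMB to the open internet is risky, which is why the allow list matters):

        [global]
            hosts allow = 203.0.113.10 203.0.113.20
            hosts deny  = ALL

        [backup]
            path = /mnt/backup
            valid users = backupuser
            writable = yes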

    Read the article

  • Catch headset pause/play keypresses in Windows

    - by akshay2000
    I have a new ultrabook with a single audio jack for input and output, instead of the separate 3.5 mm jacks we used to have on older machines. The jack is probably similar to the American audio jack specification, like the one found on a MacBook Pro. I have tried it with the Apple, HTC and Nokia earphones that ship with most smartphones. The microphone on the headset works the way it should. The thing is, these headsets also come with remote controls for volume and playback. I am sure those key presses are sent to Windows. I was hoping to catch those events and bind them to the actual media keys so that I can control music playback. I gather this is what happens on Macs, and I want to do the same thing on Windows. I'm just not sure where I can catch the events: driver level? Application level?

    Read the article

  • overload environment

    - by Richo
    I've recently switched to keeping my home directory in an svn repo across all my machines, meaning that my utility scripts, configuration (irssi, vim, zsh, screen etc.) as well as my .profile and so forth are easier to keep up to date across all the places I log in. I use a set of sourced .local files to override them on a per-site basis as required. As it stands, many of my scripts inherit some form of configuration, and for the most part I've been setting an environment variable in .profile and then, if needed on a particular site, overriding it in .profile.local. This works great, but are there pitfalls in having a stack of environment variables? Comparing against the default environment I get inside an X session before any of my personal configuration, I have not even increased it by 50%, but some of the machines I work on are low-resource: am I bloating my system unnecessarily, or being needlessly paranoid? Should I start moving this config into separate flat files that are loaded as needed? That means extra infrastructure, or alternately writing a single module for storing config that all of my utilities can inherit.
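
    For a rough sense of scale, the environment's actual footprint is easy to measure (a hedged sketch):

        env | wc -c    # total bytes of exported variables in this shell
        env | wc -l    # number of exported variables

    A few hundred extra bytes inherited per process is rarely what hurts a low-resource machine.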

    Read the article

  • What are solutions and tradeoffs to maintain search result consistency in a web application

    - by iammichael
    Consider a web application with a custom search function that must display the results in a paged manner (twenty per page, with up to hundreds of thousands of total results) and the ability to drill down to individual results that maintain next/previous links to navigate through the results. Re-executing the search on each page request to get the appropriate results for that page of data can be too expensive (up to 15 s per search). Also, since the underlying data can change frequently (e.g. addition of new results), re-executing could cause the next/previous functionality to behave inconsistently (e.g. the same results reappearing on a later page after having been viewed on an earlier page). What options exist to ensure the search results can be viewed across multiple pages in a consistent manner, and what tradeoffs does each option have in terms of network, CPU, memory, and storage requirements? EDIT: I thought caching the query's search results was an obvious necessity. The question is really asking where to cache the result set and what tradeoffs each choice might have: for example, storing the IDs of the entities in the result set on the client, in the user's session on the web server, or in a temporary table in the database. I'm not looking for a single solution, as different scenarios may call for different approaches (and such a question would be more suited to stackoverflow.com than here), but for a design comparison between the possible approaches.

    Read the article

  • OS X: What does the '@' attribute on a file mean?

    - by claytontstanley
    On a Snow Leopard machine, at the Terminal:

        la ~/src/rmcl/ | grep RMCL
        -rw-r--r--@ 1 claytonstanley staff 6766167 Nov 13 2009 RMCL

    What is that '@' attribute? This file is part of an older OS X program that runs under Rosetta. I'm having issues where some older programs running under Rosetta require the @ attribute when opening files. But I'm not sure what that attribute is, so I have no way to know how to add or remove it. I did try a thorough Google search on this, but I wasn't able to find the answer. I would have thought this would be an easy one to find; maybe the Google query isn't behaving because of the single @ special character. Any info is much appreciated. Thanks!
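
    For reference, the trailing @ in ls -l output on OS X means the file carries extended attributes. The xattr tool can list and remove them (a hedged sketch; the attribute name shown is just a common example of what you might find):

        ls -l@ ~/src/rmcl/RMCL      # -l@ lists each attribute and its size
        xattr -l ~/src/rmcl/RMCL    # show attribute names and values
        xattr -d com.apple.FinderInfo ~/src/rmcl/RMCL   # remove one attribute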

    Read the article

  • Using nested public classes to organize constants

    - by FrustratedWithFormsDesigner
    I'm working on an application with many constants. At the last code review it came up that the constants are too scattered and should all be organized into a single "master" constants file. The disagreement is about how to organize them. The majority feel that using the constant name should be good enough, but this will lead to code that looks like this:

        public static final String CREDITCARD_ACTION_SUBMITDATA = "6767";
        public static final String CREDITCARD_UIFIELDID_CARDHOLDER_NAME = "3959854";
        public static final String CREDITCARD_UIFIELDID_EXPIRY_MONTH = "3524";
        public static final String CREDITCARD_UIFIELDID_ACCOUNT_ID = "3524";
        ...
        public static final String BANKPAYMENT_UIFIELDID_ACCOUNT_ID = "9987";

    I find this type of naming convention cumbersome. I thought it might be easier to use public nested classes, and have something like this:

        public class IntegrationSystemConstants {
            public class CreditCard {
                public static final String UI_EXPIRY_MONTH = "3524";
                public static final String UI_ACCOUNT_ID = "3524";
                ...
            }
            public class BankAccount {
                public static final String UI_ACCOUNT_ID = "9987";
                ...
            }
        }

    This idea wasn't well received because it was "too complicated" (I didn't get much detail as to why it might be too complicated). I think this creates a better division between groups of related constants, and auto-complete makes them easier to find as well. I've never seen this done, though, so I'm wondering if it is an accepted practice or if there are better reasons it shouldn't be done.

    Read the article

  • Difference between SSLCertificateFile and SSLCertificateChainFile?

    - by chrisjlee
    Normally, SSL for a virtual host is set up with the following directives (quoted from another question, "For enabling SSL for a single domain on a server with multiple vhosts, will this configuration work?"):

        Listen 443
        SSLCertificateFile      /home/web/certs/domain1.public.crt
        SSLCertificateKeyFile   /home/web/certs/domain1.private.key
        SSLCertificateChainFile /home/web/certs/domain1.intermediate.crt

    What is the difference between SSLCertificateFile and SSLCertificateChainFile? The client has purchased a certificate from GoDaddy. It looks like GoDaddy only provides an SSLCertificateFile (.crt file) and an SSLCertificateKeyFile (.key file), and no SSLCertificateChainFile. Will my SSL still work without an SSLCertificateChainFile path specified? Also, is there a canonical path where these files should be placed?
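
    For what it's worth, GoDaddy normally ships an intermediate bundle alongside the certificate (commonly named gd_bundle.crt), and that bundle is what SSLCertificateChainFile expects. A hedged way to check what you were given (file names are placeholders):

        openssl x509 -noout -subject -issuer -in domain1.public.crt
        openssl verify -untrusted gd_bundle.crt domain1.public.crt
        # if verification only succeeds with the bundle supplied, clients
        # will need the chain served too, via SSLCertificateChainFile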

    Read the article

  • Hyper-V cluster - one VM won't migrate

    - by Chris W
    We have a failover cluster built on 6 blades, each running Hyper-V on Server 2008 R2. We've got a number of VMs running that all have the same basic config: VHD stored on a cluster shared volume, and 2 virtual NICs (1 for the LAN connection and 1 for the SAN connection). All of our VMs will happily migrate between any of the blades apart from one single VM, which runs fine on its current blade but will not migrate to any other location. What could be the cause, and where should I look for a detailed error message? I can't seem to find much information logged in any of the logs. Edit: I know the usual culprit is mismatched resource names. We've already been there, with the NICs named differently on some of the blades. As far as we can tell, everything now looks identical on each piece of metal.
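
    When the failover manager's own events are thin, the cluster debug log usually names the real blocker. A hedged sketch using the FailoverClusters PowerShell module that ships with Server 2008 R2 (the node name is a placeholder):

        Import-Module FailoverClusters
        Get-ClusterLog -Node BLADE03 -TimeSpan 15   # dump cluster.log for the last 15 minutes
        # then search C:\Windows\Cluster\Reports\cluster.log on that node for
        # the VM's resource name around the time of the failed migration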

    Read the article

  • Random compositing lag

    - by user1020567
    My laptop specs: 512 MB of RAM, of which 64 MB are shared with an integrated GPU (ATI Radeon Xpress 200M), and an Intel 1.6 GHz Celeron M single-core processor. I've spent months trying to figure out why compositing and effects sometimes lag on any distro I try. Now I've come to realise that no matter what drivers I try (the default ones work for me on pretty much any Linux), the compositing lag is random. When I used Ubuntu 10.10, for example, sometimes window compositing would lag and sometimes it wouldn't. The PC is able to render those effects, so hardware is not the problem. It's completely random and unpredictable: sometimes when I turn on the computer the effects lag horribly, and sometimes everything is completely smooth. I've also checked startup items and there don't seem to be any unnecessary entries. I also tried building my own OS with Arch Linux and the problem persists there, so I can only assume it's a driver issue of some sort. By default there are lots of drivers supplied with Linux distributions. Could it be that they're in the way? The ones that I need are ati/radeon (or both? What's the difference between them?) and there seem to be a lot of others... What should I do?
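
    One thing worth ruling out on each boot is which driver actually bound to the card that session (a hedged sketch; glxinfo needs the mesa-utils package):

        lspci -k | grep -A 3 VGA     # shows the kernel driver in use for the GPU
        glxinfo | grep -i renderer   # tells you if rendering fell back to software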

    Read the article

  • Combine OS partion with data partition on NAS4Free/FreeNAS

    - by Pak
    I recently built a NAS4Free (formerly FreeNAS) machine using a 256 MB (yes, MB) USB drive for the OS. When I did the original install, I had the bright idea of making the OS partition just big enough for the OS and then creating a second partition from the remainder of the drive to store stuff pertaining to the OS. I never really found a use for the data partition, and I've ended up running out of space on the OS partition, so now I'd like to combine the two into a single partition. Is this something that is possible to do while everything is up and running? If it comes down to it, I can take the machine down and do a fresh install of the OS using the entire space of the USB drive, but I'd like to use this as an opportunity to better familiarize myself with FreeBSD/UNIX-type systems. If this is possible, will it interfere with NAS4Free in any way? The data partition shows up in the web interface under the disks section, and if I end up manually changing the partitions, I'd be concerned about NAS4Free getting confused by the missing partition.
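
    The FreeBSD-native route would be gpart plus growfs (a hedged sketch: da0 and the partition indices are placeholders, the exact layout may differ, and growfs on this era of FreeBSD cannot grow a mounted filesystem, so expect to boot from other media and take a backup first):

        gpart show da0            # confirm the layout and partition indices
        gpart delete -i 2 da0     # drop the unused data partition
        gpart resize -i 1 da0     # grow the OS partition into the freed space
        growfs /dev/da0s1a        # grow the filesystem to match (slice naming may differ)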

    Read the article

  • options for deploying application

    - by terence
    I've created a simple web application, a self-contained tool with a user system. I host it publicly for everyone to use, but I've gotten some requests to allow companies to host the entire application privately on their internal systems. I have no idea what I'm doing: I have no experience with deployment or server stuff. I'm just someone who learned enough JS and PHP to make a tool for my own needs. The application runs on Apache, MySQL, and PHP. What's the best way to package my application so others can run it privately? I'm assuming there are better options than just sending them all the source code. I'd like to find a solution that does not require support to set up (I'm just a single developer without much free time), is easy to configure, and is easy to update. Does there exist some one-size-fits-all thing that I can give to someone, so they can install it, and bam, when they go to http://myapplication/ on their intranet, it works? Thanks for your help.
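
    One low-effort shape for this (a hedged sketch; every file and path name here is a placeholder) is a tarball plus a short install script that assumes a stock Apache/MySQL/PHP box:

        tar xzf myapplication.tar.gz -C /var/www/
        cd /var/www/myapplication
        mysql -u root -p < schema.sql             # create the database and tables
        cp config.sample.php config.php           # edit DB credentials in here
        cp myapplication.conf /etc/httpd/conf.d/  # vhost or alias config
        apachectl graceful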

    Read the article

  • Is the Entity Component System architecture object oriented by definition?

    - by tieTYT
    Is the Entity Component System architecture object oriented, by definition? It seems more procedural or functional to me. My opinion is that it doesn't prevent you from implementing it in an OO language, but it would not be idiomatic to do so in a staunchly OO way. It seems like ECS separates data (E & C) from behavior (S). As evidence: "The idea is to have no game methods embedded in the entity." And: "The component consists of a minimal set of data needed for a specific purpose. Systems are single-purpose functions that take a set of entities which have a specific component." I think this is not object oriented, because a big part of being object oriented is combining your data and behavior together. As evidence: "In contrast, the object-oriented approach encourages the programmer to place data where it is not directly accessible by the rest of the program. Instead, the data is accessed by calling specially written functions, commonly called methods, which are bundled in with the data." ECS, on the other hand, seems to be all about separating your data from your behavior.
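
    For illustration, a minimal hedged sketch in Java (all names invented) of the separation the question describes: components are bare data, the entity is just a bag of components, and the system owns all the behavior:

        import java.util.*;

        class Position { float x, y; }            // component: data only
        class Velocity { float dx = 1, dy = 0; }  // component: data only

        class Entity {                            // entity: a bag of components
            Map<Class<?>, Object> parts = new HashMap<>();
            <T> T get(Class<T> c) { return c.cast(parts.get(c)); }
        }

        class MovementSystem {                    // system: behavior only
            void update(List<Entity> entities, float dt) {
                for (Entity e : entities) {
                    Position p = e.get(Position.class);
                    Velocity v = e.get(Velocity.class);
                    if (p != null && v != null) {
                        p.x += v.dx * dt;
                        p.y += v.dy * dt;
                    }
                }
            }
        }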

    Read the article

  • Why is a software development life-cycle so inefficient?

    - by user87166
    Currently the software development life-cycle followed at the IT company I work for is:

    1. The "business" works with a solution manager to build a Business Requirement document.
    2. The solution manager works with the program manager to build a functional spec.
    3. The PM works with the engineering lead to develop a release plan, and with the engineering team to develop technical specifications.
    4. If any clarifications are required, developers contact the PM, who contacts the solution manager, who contacts the business, and all the way back, introducing a latency of nearly 24 hours and massive email chains for every clarification. By the time the tech spec is done, nearly a month has passed in back and forth.
    5. Two weeks then go to development while test writes test cases.
    6. Code is dropped formally to test, and test starts raising bugs. Even if there is one root cause behind 10 different issues, and it is easily fixed, developers are not allowed to give fresh code to test for the next week.
    7. After 2-3 such drops to test, the code is given to the ops team as a "golden drop" (2 months have now passed since the beginning).
    8. The ops team deploys the code to a staging environment. If it runs stable for a week, it is promoted to UAT, and after 2 weeks of that it is promoted to prod. If any bugs are found here, well, applying for a visa requires less paperwork.

    This entire process is followed even if a single SSRS report is to be released. How do other companies handle such requirements? I'm wondering why the business cannot just hand the requirements to developers, who build and deploy to UAT themselves, expose it to the business who raise functional bugs, and after fixing those promote to prod (even for more complex work).

    Read the article
