Search Results

Search found 16903 results on 677 pages for 'single responsibility'.

  • Making audio CDs en masse - Linux-based solutions?

    - by The Journeyman geek
    My mom sings and gives away CDs of her music to people. Invariably it falls to me to burn the CDs for her, and burning 50-100 CDs on a single drive is a pain. I do have a handful of CD burners and a slightly geriatric old PIII 450. Here is what I want to be able to do: either point an application at a folder of WAVs or MP3s, say on the CLI how many copies I need (since then I can SSH into the system and use it headless) and feed two or more CD burners discs until it's done, OR pop a single CD into a master drive and have its contents duplicated to two or more burners. I'd rather have it running on Linux, be command-line based, and be as little work as possible - almost automatic, short of telling it how many copies I want, would be ideal. I'm sure people will wonder about legality: my mom sings her own music, it's classical, and it's older than copyright law, so that's a non-issue. I just want a way to make this chore a little easier, short of telling my mom to do it herself.
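
    A minimal sketch of the headless burning loop, assuming wodim (the cdrecord fork most distributions ship) is installed and that the burners sit at /dev/sr0 and /dev/sr1 - the script name and both device paths are assumptions, not a tested tool:

        #!/bin/sh
        # burn_batch.sh DIR COPIES - burn COPIES audio CDs from the WAVs in DIR
        DIR=$1; COPIES=$2; burned=0
        while [ "$burned" -lt "$COPIES" ]; do
            echo "Load blank discs in both drives, then press Enter"; read line
            for dev in /dev/sr0 /dev/sr1; do
                wodim dev="$dev" -audio -pad "$DIR"/*.wav &  # burn both drives at once
            done
            wait
            burned=$((burned + 2))
        done

    Purpose-built duplicator tools exist too, but anything scriptable (cdrecord, cdrdao) slots into the same loop.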

  • Using a CDN for CMS software (multiple sites)

    - by SmokeyPHP
    I'm currently researching ideas for the media management side of a CMS I'm writing. I was looking at having images served from a CDN, which is fine on a single site, but I want all sites that run the CMS to make use of a CDN (which will most likely be custom developed, rather than a third-party service like S3). My main question is: is a multi-site CDN a good idea? I can't think of a downside, but have probably missed something - obviously the sites won't share the same folder, as I envisage the requests looking like css.cdnsite.com/example.com/style.css or something along those lines. Having multiple sites in the same place will obviously make it easier for us to manage, as well as being cheaper, but then I wonder if it'll be worth it... Long story short: how should the CMS handle user-uploaded media across separate installations?

    1. Just keep a local copy of all assets and serve them from the same site, like in days of yore?
    2. Keep a local copy, force sites to use www. and have CDN subdomains per site?
    3. Or use a single separate CDN for all sites?

    Apologies for the length of this question; I'm not sure if this should be multiple questions or not, as all parts are kind of related and could affect each other.

  • Recommendation for configuration of a multi-core guest OS

    - by reidLinden
    Hi there, I've just received an upgraded host machine and am looking to push some of those advances to my workstation's guest OS(s). In particular, I used to have a single processor with 2 cores, so my guest OS only had 1 processor/1 core. Now I've got a single processor with 8 cores, so I'm curious what would be recommended for my guest OS: 1 processor/4 cores? 2 processors/2 cores? 4 processors/1 core? My instinct says to stick with the number of physical processors (or fewer), but is that based on reality? I spent a good while looking for an answer to this, but perhaps my google-karma isn't in my favor today. Suggestions?

  • MySQL slow query log logging all queries

    - by Blanka
    We have a MySQL 5.1.52 Percona Server 11.6 instance that suddenly started logging every single query to the slow query log. The long_query_time configuration is set to 1, yet suddenly we're seeing every single query (e.g. we just saw one that took 0.000563s!). As a result, our log files are growing at an insane pace. We just had to truncate a 180G slow query log file. I tried setting the long_query_time variable to a really large number (1000000) to see if it stopped altogether, but same result.

        show global variables like 'general_log%';
        +------------------+--------------------------+
        | Variable_name    | Value                    |
        +------------------+--------------------------+
        | general_log      | OFF                      |
        | general_log_file | /usr2/mysql/data/db4.log |
        +------------------+--------------------------+
        2 rows in set (0.00 sec)

        show global variables like 'slow_query_log%';
        +---------------------------------------+-------------------------------+
        | Variable_name                         | Value                         |
        +---------------------------------------+-------------------------------+
        | slow_query_log                        | ON                            |
        | slow_query_log_file                   | /usr2/mysql/data/db4-slow.log |
        | slow_query_log_microseconds_timestamp | OFF                           |
        +---------------------------------------+-------------------------------+
        3 rows in set (0.00 sec)

        show global variables like 'long%';
        +-----------------+----------+
        | Variable_name   | Value    |
        +-----------------+----------+
        | long_query_time | 1.000000 |
        +-----------------+----------+
        1 row in set (0.00 sec)
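
    Two things can cause exactly this symptom and are worth ruling out; the checks below are suggestions, not a diagnosis. Stock MySQL logs queries regardless of duration when log_queries_not_using_indexes is on, and long_query_time is also session-scoped, so connections opened before a global change keep their old value:

        SHOW GLOBAL VARIABLES LIKE 'log_queries_not_using_indexes';
        -- existing connections keep the long_query_time they started with:
        SHOW SESSION VARIABLES LIKE 'long_query_time';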

  • Mac & Windows backup solutions - offsite backups

    - by Kristiaan
    I'm looking for some advice on a system I'm looking to implement within our company, but so far I have not found an adequate solution. I need to provide my users with a way to back up their laptops whilst in the office, and if possible offsite as well. We have a mixture of Windows and Mac laptops, so the software should ideally be multi-platform. This is the first time I am attempting to do something like this, as we normally charge the users with responsibility for their own backups. I have ruled out most of the services like Dropbox and SugarSync (unless one exists that does this), as whilst they do exactly what I want, they do not give me any control over restoring/recovering data in the event of the user being unavailable, since they require the user's account password to access the data.

  • Complex string matching with fuzzywuzzy

    - by That1Guy
    I'm attempting to write a process that matches obscure strings to a single 'master string' for further processing. I have a lot of data that looks something like this:

        Basketball
        Basket Ball
        Football
        BasketBallR
        BBall
        BBall - r
        FootB

    ...and so on. These need to be mapped to a master record like so:

        Basketball = Basket Ball, BBall
        Basketball - R = BasketBallR, BBall - r

    I also have instances of data resembling this format:

        Football -r
        FootBall - r-g/H,Q,HH

    These situations need to be separated into different categories before being mapped. For example, FootBall - r-g/H,Q,HH should become:

        Football - r
        Football - g
        Football - H
        Football - Q
        Football - HH

    At this point, it still needs to be mapped to a master record... I've tried several different combinations of fuzzywuzzy matching methods, Levenshtein distance measurements, regex, etc., and can't seem to find a reliable method to logically associate different naming styles of a single item with a master name. I'm throwing my hands up in desperation. Are there any existing Python resources that can help sort out my problem? Are there other options? Can anybody point out an obvious option that I might have overlooked? Basically, any suggestion, solution, resource or alternative method is greatly appreciated.
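
    A rough sketch of one possible pipeline: expand the suffix shorthand first, then fuzzy-match each expanded name against the master list. The master list contents, the separator conventions, and the threshold of 80 are all assumptions to adapt:

        import re
        from fuzzywuzzy import process  # pip install fuzzywuzzy python-Levenshtein

        MASTERS = ["Basketball", "Basketball - R", "Football - r", "Football - g"]

        def expand(raw):
            """Split 'FootBall - r-g/H,Q,HH' into one candidate per suffix."""
            base, sep, tail = raw.partition(" - ")
            if not sep:
                return [raw]
            suffixes = [s.strip() for s in re.split(r"[-/,]", tail) if s.strip()]
            return ["%s - %s" % (base, s) for s in suffixes]

        def match(raw, threshold=80):
            """Map every expanded candidate to its best master record, or None."""
            out = []
            for cand in expand(raw):
                best, score = process.extractOne(cand, MASTERS)
                out.append(best if score >= threshold else None)
            return out

        print(match("FootBall - r-g/H,Q,HH"))

    Variants like 'Football -r' (no spaces around the dash) would need the partition step loosened, e.g. splitting once on the regex r"\s*-\s*" instead.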

  • More productive alone than in a team?

    - by Furry
    If I work alone, I can be super-productive, if I want to be: running prototypes within a day, something you can deploy and use within a few days. Not perfect, but good enough. I have also had this experience a few times when working directly with someone else. Everybody could do the whole thing, but it was more fun not to do it alone, and also quicker. The right two people can take an admittedly not-too-large project onto new levels. Now at work we have a seven-person team and I do not feel nearly as productive. Not even nearly. Certain stuff needs to be checked against something else, which then needs to also take care of some new requirement which just came in three days ago. All sorts of stuff, mostly important, but often just technical debt from long ago, or a misconception, or different vocabulary for the same thing, or sometimes just a not-too-technically-thought-out great idea from someone who wants to have their say, and so on. Digging down the rabbit hole, I think to myself that I could do larger portions of this work faster alone (and somewhat better, too), but it's not my responsibility (someone else gets paid for that), so by design I should not care. But I do, because certain things go hand in hand (as you may have experienced when doing side projects on your own). I know this is something Fred Brooks has written about, but still: what's your strategy for staying as productive as you know you could be in the cubicle? Or did you quit for some related reason, and if so, where did you go?

  • How to partition a 1 TB drive for performance on a Windows development machine?

    - by dip
    I saw a similar question for Linux, but nothing for Windows. I'm getting a new 1 TB drive for my dev box at work. The OS will be Windows 7 Pro with 8 GB of RAM and just the single 1 TB drive. Backups are not a concern, and I won't be storing large multimedia files. I want the fastest possible performance for general Windows usage and for compilation. I will defrag nightly with a smart defragmenter like PerfectDisk. Should I just go with a single partition, or is there some way I can lay things out for the best performance?

  • Multithreading - how to split the tasks

    - by Motig
    If I have a game engine with the basic 'game engine' components, what is the best way to split the tasks with a multi-threaded approach? Assume I have the standard components of rendering, physics, scripts and networking, and a quad-core. I see two ways of multi-threading (a sketch of Option B follows below):

    Option A ('vertical'): allow one core for each component of the engine, e.g. one core for the rendering task, one for physics, etc. Advantages: I do not need to worry about thread safety within each component, and I can take advantage of special optimizations provided for single-threaded access (e.g. DirectX offers a flag that can be set to tell it that you will only use single-threading).

    Option B ('horizontal'): each task may be split up into 1 <= n <= numCores threads and executed simultaneously, one task after the other. Advantages: it allows for work-sharing, i.e. each thread can take over work still remaining while the others are still processing, and I can take advantage of libraries that are designed for multi-threading (i.e. ... DirectX).

    I think, in retrospect, I would pick Option B, but I wanted to hear you guys' thoughts on the matter.
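
    For what the work-sharing half of Option B can look like, here is a minimal C++11 sketch (the function name and task granularity are assumptions; a real engine would keep a persistent thread pool rather than spawning threads per frame):

        #include <atomic>
        #include <cstddef>
        #include <functional>
        #include <thread>
        #include <vector>

        // Each worker claims the next unclaimed task until the list is drained,
        // so a thread that finishes early automatically takes over remaining work.
        void parallel_for_tasks(const std::vector<std::function<void()>>& tasks,
                                unsigned workers)
        {
            std::atomic<std::size_t> next{0};
            std::vector<std::thread> pool;
            for (unsigned w = 0; w < workers; ++w)
                pool.emplace_back([&] {
                    std::size_t i;
                    while ((i = next.fetch_add(1)) < tasks.size())
                        tasks[i]();  // run the claimed task
                });
            for (auto& t : pool)
                t.join();
        }

    The claim-next-index pattern is the core of the idea; job systems in shipping engines add priorities, dependencies and per-frame memory on top of it.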

  • Extracting a line section of a MySQL backup using sed

    - by carpii
    I occasionally need to extract a single record from a MySQL backup. To do this, I first extract the single table I want from the backup...

        sed -n -e '/CREATE TABLE.*usertext/,/CREATE TABLE/p' 20120930_backup.sql > table.sql

    In table.sql, the records are batched using extended inserts (with maybe 100 records per insert before it creates a new line starting with INSERT INTO), so they look like...

        INSERT INTO usertext VALUES (1, field2 etc), (2, field2 etc), ...
        INSERT INTO usertext VALUES (101, field2 etc), (102, field2 etc), ...

    I'm trying to extract record 239560 from this, using...

        sed -n -e '/(239560.*/,/)/p' table.sql > record.sql

    i.e. start streaming when it finds 239560, and stop when it hits the closing bracket. But this isn't working as I hoped; it just results in the full insert batch being output. Please can someone give me some pointers as to where I'm going wrong? Would I be better off using awk for extracting segments of lines, and using sed for extracting lines within a file?
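
    One hedged pointer: sed range addresses select whole lines, and each extended insert is a single line, so the /(239560/,/)/ range can never narrow below the full batch. Matching within the line avoids that; a sketch that assumes the field data itself contains no closing parenthesis:

        grep -o '(239560,[^)]*)' table.sql > record.sql

    awk would work the same way; the tool matters less than matching a substring instead of a line range.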

  • Suggestion: ALLFILES option for RESTORE

    - by Greg Low
    The default action when performing a backup is to append to the backup file, yet the default action when restoring a backup is to restore just the first file.

    I constantly come across customer situations where they are puzzled that they seem to have lost data after they have completed a restore. Invariably, it's just that they haven't restored all the backups contained within a single OS file. This happens most commonly with log backups, but also happens when they have not restored the most recent database backup file.

    It is not trivial to achieve this within simple T-SQL scripts when the number of backup files within the OS file is unknown. It really should be. I'd like to see a FILES=ALLFILES option on the RESTORE command. For RESTORE DATABASE, it should restore the most recent database backup plus any subsequent log files. For RESTORE LOG (which is the most important missing option), it should just restore all relevant log backups that are contained.

    If you agree, you know what to do: please vote: https://connect.microsoft.com/SQLServer/feedback/details/769204/option-to-restore-all-backups-files-within-a-media-set

    Alternately, how would you write a T-SQL command to restore all log backups within a single OS file where the number of files is unknown? Would love to hear creative solutions, because all the ones that I think of are pretty messy and need dynamic SQL.
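
    One messy-but-workable sketch of the loop (the database name and path are placeholders; it assumes SQL Server 2008 or later, and that the error raised when @file runs past the last backup set is catchable):

        DECLARE @file INT = 1;
        WHILE 1 = 1
        BEGIN
            BEGIN TRY
                RESTORE LOG MyDatabase
                FROM DISK = N'C:\Backups\MyDatabase_log.bak'
                WITH FILE = @file, NORECOVERY;
            END TRY
            BEGIN CATCH
                BREAK;  -- ran past the last backup set in the media set
            END CATCH
            SET @file += 1;
        END

    RESTORE HEADERONLY can count the backup sets up front instead, but capturing its output needs a version-specific temp table, which is exactly the messiness described above.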

  • How to manage Agile developers working with traditional (serial) business people?

    - by Riggy
    Good afternoon. My work environment has some problems. Our IT team is trying to be more agile, but we're not really getting buy-in from the business. They attend our daily stand-ups and sprint reviews, and they help with sprint planning, but then they turn around and do four months of requirements gathering for a project before moving forward with a (mostly) serial development style. The sprint goals are things like "get XX% closer to release". For the IT team, they've turned the sprints into a sort of death march: we end a sprint one day and start a new sprint the very next day. There's no reflection or changes done between sprints, only during. Having never done any of the agile methodologies before, I haven't had a very pleasant introduction to them. So my questions are:

    1) Should there be some time (perhaps a week or so) between sprints to do the reflection/introspection/changes/etc.? Or are back-to-back-to-back sprints the norm?

    2) Is there any chance of survival for an agile team with no agile business counterparts? If not, are there some transitional methodologies, or even tips, for moving the business towards an iterative, if not necessarily agile, mindset?

    3) Should your entire team be on every sprint? We have almost 20 programmers on a single sprint but working on completely different projects (typically teams of 3-5, sometimes larger). Is it normal to have a single sprint, or should we be trying to manage multiple independent sprints? Should we be trying to keep the multiple sprints in concurrent lockstep, or should their timetables be allowed to overlap and be flexible?

    Any thoughts or advice is appreciated. This is my first time coming over from SO for a question, so please let me know if there are better ways to phrase these kinds of questions (the FAQ was rather helpful, but I'm still not sure I'm following it perfectly). Thanks!

  • Running Outlook from VB with multiple email addresses [migrated]

    - by Mac
    I am sending emails from my VB6 system and I am having problems with sending a single email to various email addresses. The code is as follows:

        On Error Resume Next
        Err.Clear
        Set oOutLookObject = CreateObject("Outlook.Application")
        If Err <> 0 Then
            MsgBox "Email error. Err = " & Err & " Description = " & Err.Description
            EmailValid = "N"
            Exit Function
        End If
        Set oEmailItem = oOutLookObject.CreateItem(0)
        If Err <> 0 Then
            MsgBox "Email error. Err = " & Err & " Description = " & Err.Description
            EmailValid = "N"
            Exit Function
        End If
        With oEmailItem
            .Recipients.Add (SMRecipients)
            .Subject = SMSubject
            .Importance = IMPORTANCENORMAL
            .Body = SMBody
            For i = 1 To 10
                If RTrim(SMAttach(i)) <> "" Then
                    .Attachments.Add SMAttach(i)
                Else
                    Exit For
                End If
            Next i
            .Send
        End With
        If Err <> 0 Then
            MsgBox "Email error. Err = " & Err & " Description = " & Err.Description
            EmailValid = "N"
            Exit Function
        End If
        ''' .Attachments.Add ("c:\temp\test2.txt")
        Set oOutLookObject = Nothing

    I have set SMRecipients to a single email address and it is fine, but when I add more addresses separated by semicolons or spaces it only sends to the original address. My system runs under XP. Another point is that it used to find the addresses in the Outlook address book, and where they were not specific enough it would display the matching addresses for selection of the correct one. It no longer does this.
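
    One hedged guess at the recipients problem: Outlook's Recipients.Add expects one name per call, so the entire semicolon-separated string is treated as a single recipient that resolves (at best) to its first match. A sketch of adding each address individually and resolving against the address book:

        Dim addr As Variant
        For Each addr In Split(SMRecipients, ";")
            If Trim$(addr) <> "" Then
                oEmailItem.Recipients.Add Trim$(addr)
            End If
        Next
        oEmailItem.Recipients.ResolveAll  ' returns False if any name fails to resolve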

  • 32-bit programs can't access the Internet in Windows 7 64-bit

    - by korona
    I recently got a new ASUS laptop with Windows 7 Home Premium pre-installed. It worked OK for a while, but a couple of days ago I suddenly couldn't access the Internet any more. After narrowing down the problem, I've reached the conclusion that 32-bit programs are suddenly unable to use the Internet, but 64-bit applications work just fine.

    Examples of programs that DON'T work any more:

        Google Chrome
        Firefox
        Internet Explorer 8
        World of Warcraft

    Examples of programs that DO work:

        Internet Explorer 8 (64 bit)
        ping (command line)
        nslookup (command line)
        ftp (command line)

    I'm pretty sure those command-line apps are 64-bit native. A re-install of Windows using the recovery partition on the laptop did fix the problem temporarily, but now it's back again. And I seem to be stuck between a rock and a hard place getting someone to take responsibility for this: the vendor says to talk to ASUS, ASUS says it's a software issue, and Microsoft doesn't give support on OEM licenses... Does anyone know how to solve this issue?
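
    A hedged aside rather than a diagnosis: this exact split (32-bit sockets broken, 64-bit fine) is often attributed to a corrupted Winsock LSP chain, which 32-bit software can install hooks into. Resetting it from an elevated command prompt is cheap to try before another reinstall:

        netsh winsock reset
        netsh int ip reset
        :: reboot afterwards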

  • Need recommendations for a hardy scanner that has a robust feeder tray

    - by JohnyD
    In the early days of our company all our information came in on paper, and all of what we sold was on paper. Because of this we literally rent an old bank vault to house the millions of sheets of paper that, some say, still contain relevant information. That being said, I'm looking into purchasing some hardware capable of scanning all these documents and converting them to PDF. Being new at this level of digitization, I would like to ask for recommendations for accomplishing this task. Most of this material exists as separate bound studies/articles/etc. Someone would have to remove the bindings and be able to load many pages at a time and have the scanner feed them all through and convert them to a single PDF (one PDF per study/article/etc.). If you have any recommendations I would very much appreciate hearing about them, thanks.

  • EXEC() syntax error using ODBC

    - by Mike Trader
    I have written a little ETL application from which I wish to run a few lines of T-SQL. If I enter a simple query like "SELECT * FROM MyTable", everything is fine. All single-line commands run as expected. A multi-line query like this is also fine:

        DECLARE @TableName NVARCHAR(MAX)
        SET @TableName = 'MyTable'
        EXECUTE ( 'DROP TABLE ' + @TableName )

    However, when I try to run:

        DECLARE @TableName NVARCHAR(MAX)
        OPEN Tables
        FETCH NEXT FROM Tables INTO @TableName
        WHILE @@FETCH_STATUS = 0
        BEGIN
            EXEC( 'DROP TABLE ' + @TableName )
            FETCH NEXT FROM Tables INTO @TableName
        END

    I get a syntax error after TABLE in the EXEC() call. I have spent 6 hours trying to figure this out, thinking perhaps I need to escape the single quote or something. I just cannot see the problem. A set of fresh eyes would be appreciated.
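
    For anyone trying to reproduce this, the failing snippet also omits the cursor declaration; a self-contained version that parses cleanly might look like the following (the sys.tables filter is an invented placeholder). Running it directly in Management Studio would help establish whether the error comes from the T-SQL itself or from the ODBC layer:

        DECLARE @TableName NVARCHAR(MAX);
        DECLARE Tables CURSOR FOR
            SELECT QUOTENAME(name) FROM sys.tables WHERE name LIKE N'Staging%';
        OPEN Tables;
        FETCH NEXT FROM Tables INTO @TableName;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            EXEC(N'DROP TABLE ' + @TableName);
            FETCH NEXT FROM Tables INTO @TableName;
        END
        CLOSE Tables;
        DEALLOCATE Tables;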

  • Syncing Large Directories/Filesystems using USB Drive [closed]

    - by Alan Lue
    Does anyone have a solution for syncing large directories/filesystems using just a USB flash drive (and specifically without using a network connection)? The objective is simply to sync a user directory between two computers. The contents of the user directory could amount to a large quantity of data - say, a quantity larger than could be stored on any single USB drive - but the aggregate size of the changes that must be propagated by a single sync could easily fit on a USB drive. As an example, suppose a user directory is already synchronized between a desktop and a laptop computer. Here's a use case:

    1. Some changes are made in the user directory on the desktop.
    2. We mount a USB drive onto the desktop and copy whatever changes need to be applied to the laptop user directory in order to synchronize the desktop and laptop user directories.
    3. We now mount the USB drive onto the laptop and apply the changes.

    The desktop and laptop user directories are now synchronized. Any ideas? Alan
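
    rsync's batch mode was built for roughly this workflow. A sketch, assuming the desktop keeps a mirror of the laptop's last-synced state to diff against (all paths are placeholders):

        # on the desktop: record the differences onto the USB drive
        # without touching the local mirror
        rsync -a --only-write-batch=/mnt/usb/home.batch ~/ ~/laptop-mirror/

        # on the laptop: replay the recorded changes
        rsync -a --read-batch=/mnt/usb/home.batch ~/

    After a successful run you would refresh the desktop's mirror (a plain rsync -a ~/ ~/laptop-mirror/) so the next batch is computed against the laptop's new state.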

  • Distributed website server redundancy

    - by Keith Lion
    Assume a website infrastructure is very complicated and fully distributed (probably like most large web companies). Am I right in thinking that although there are all these extra web servers to handle multiple client requests, there is still a single "machine" through which users must enter? I am guessing this machine will be the one physically associated with the IP address. I ask because I need to know whether, in places where distributed systems exist, there is still a single point of failure, usually the control node or, in this example, the machine connected to the public Internet. Surely there cannot be two machines connected to the Internet, as they would have to have different IP addresses? This "machine" may not be a server per se; maybe it is a piece of Cisco equipment. I just need to know whether, in the real world, these distributed systems still have a particular section where they depend on the integrity of one electronic device.

  • Installing Solaris 10 on a Sun T5220 - ZFS/UFS RAID 10?

    - by Matthew
    I am in a bit of a time crunch and need to get two T5220s built. We were very happy to see two boxes in our aged inventory which had 8 HDDs each, but didn't think to check whether they were running hardware RAID or not. Turns out that they aren't. When we install, we are given the option to use UFS or ZFS, but when we select a place to install we're only given the option of installing on one single disk. Is it possible to create a software RAID 10 across all of the disks and install the OS on that? Sorry if any lingo is wrong; I'm not really a Sun guy and our guru is out of town right now. Any help would be really appreciated! Note: most of the guides I've found on Google entail installing the OS on a single disk and then creating a separate RAID 10 on other disks. We would actually like the OS to reside on the RAID 10. Hope that clarifies things.
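
    A hedged sketch of the usual compromise, based on the constraint (worth verifying against the Solaris 10 docs) that a ZFS root pool can only be a single disk or a mirror, never striped: mirror two disks for the OS during the install, then build the remaining six into a striped set of mirrors (RAID 10) for data. Device names below are placeholders:

        # after installing onto a two-way mirrored rpool (c1t0d0 + c1t1d0):
        zpool create datapool \
            mirror c1t2d0 c1t3d0 \
            mirror c1t4d0 c1t5d0 \
            mirror c1t6d0 c1t7d0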

  • How to rotate a group of objects around a common center?

    - by user1662292
    I've made a model in 3D Studio Max 9. It consists of a variety of cubes, cylinders, etc. In XNA I've imported the model okay and it shows correctly. However, when I apply rotation, each component in the model rotates around its own centre. I want the model to rotate as a single unit. I've linked the components in 3D Max and they rotate as I want in Max.

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            model = Content.Load<Model>("Models/Alien1");
        }

        protected override void Update(GameTime gameTime)
        {
            camera.Update(1f, new Vector3(), graphics.GraphicsDevice.Viewport.AspectRatio);
            rotation += 0.1f;
            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);
            Matrix[] transforms = new Matrix[model.Bones.Count];
            model.CopyAbsoluteBoneTransformsTo(transforms);
            Matrix worldMatrix = Matrix.Identity;
            Matrix rotationYMatrix = Matrix.CreateRotationY(rotation);
            Matrix translateMatrix = Matrix.CreateTranslation(location);
            worldMatrix = rotationYMatrix * translateMatrix;
            foreach (ModelMesh mesh in model.Meshes)
            {
                foreach (BasicEffect effect in mesh.Effects)
                {
                    effect.World = worldMatrix * transforms[mesh.ParentBone.Index];
                    effect.View = camera.viewMatrix;
                    effect.Projection = camera.projectionMatrix;
                    effect.EnableDefaultLighting();
                    effect.PreferPerPixelLighting = true;
                }
                mesh.Draw();
            }
            base.Draw(gameTime);
        }

    More info: rotating the object via its properties works fine, so I'm guessing there's something up with the code rather than with the object itself. Translating the object also causes the objects to be moved independently of each other rather than as a single model, and each piece becomes spread around the scene. The model is in .X format.
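
    A hedged observation: XNA composes transforms for row vectors, so the leftmost matrix applies first. With worldMatrix * transforms[...], each mesh is rotated about its own origin and only then offset by its bone transform, which matches the symptom exactly. The standard sample ordering applies the bone transform first:

        // bone offset first, then the shared rotation/translation,
        // so every mesh rotates about the model's common origin
        effect.World = transforms[mesh.ParentBone.Index] * worldMatrix;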

  • How can I use one domain controller to manage 3 separate small firms?

    - by Plamen Jordanov
    Currently we have one domain controller that has 15 users and a couple of services (hMailServer, IIS, DNS, Active Directory). Now the owners of the firm have created two new firms, whose computers and networks are my responsibility, and I wonder how exactly to join the users to the existing domain. Do you think it is a good idea to just include all computers and users from all firms under one domain, or is there another solution? Have some of you run into this kind of situation, and what did you do?

    ---Edit---

    Brent, Dan, thanks for the info, guys. For now I will follow Brent's advice until we get the new server, which we will virtualize; the old server will then be our second DC in a different location. Heck, we might even consider some pay-as-you-go VPS solution for DC redundancy.

  • Who wants to keep developing?

    - by wcm
    I'm a bit older than most of my peers, having come into programming in my mid-30s. The thing is, I love what I do. Most of my project managers and bosses are my age or younger. I'm really OK with that. I, however, have no desire to climb the company ladder. While I regularly take on the responsibility of making sure that projects get done, and my peers often look to me for programming and architectural guidance, I just like writing code and want to keep doing it for as long as possible. Honestly, my only real goal is to grow into being a crusty old tech lead until I retire. IF I retire. I would so much rather learn the latest and greatest new technology than PMP my resume. Are there others out there who feel like this? Because I often feel rather alone in my pathology.

    EDIT: Something I didn't make clear is that I really like helping and mentoring other developers. It makes me feel good and useful and (to be brutally honest) important.

  • What is a "good" tool to password-protect .pdf files?

    - by Marius Hofert
    What is a "good" tool to encrypt (password-protect) .pdf files? (Without being required to buy additional software; the protection can be created under Linux, but the password query should work on Windows too.) I know that zip can do it:

        zip zipfile_name_without_ending -e files_to_encrypt.foo

    What I don't like about this is that for a single file, you have to use WinZip to open the zip and then click the file again. I would rather be prompted for a password when opening the .pdf (single-file case). I know that pdftk can do this:

        pdftk foo.pdf output foo_protected.pdf user_pw mypassword

    The problem here is that the password is displayed in the terminal - even if you use ... user_pw PROMPT. But in the end you get a password-protected .pdf, and you are prompted for the password when opening the file.
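
    One possible workaround for the visible password, a sketch assuming bash and a reasonably recent qpdf are installed (the resulting file prompts normally in any Windows PDF reader):

        read -s -p "Password: " pw; echo
        qpdf --encrypt "$pw" "$pw" 256 -- foo.pdf foo_protected.pdf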

  • Tar and gzip together, but the other way round?

    - by Boldewyn
    Gzipping a tar file as a whole is drop-dead easy and even implemented as an option inside tar. So far, so good. However, from an archiver's point of view, it would be better to tar the gzipped single files. (The rationale behind it is that data loss is minimized if there is a single corrupt gzipped file, rather than your whole tarball being corrupted due to gzip or copy errors.) Has anyone experience with this? Are there drawbacks? Are there more solid/tested solutions for this than:

        find folder -exec gzip '{}' \;
        tar cf folder.tar folder
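
    For what it's worth, gzip can recurse on its own, which avoids find spawning one process per file; a sketch of the same idea (both variants compress the files in place, so run them on a copy if the originals must stay uncompressed):

        gzip -r folder            # one .gz per file, recursively, in place
        tar cf folder.tar folder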

  • Good practice on Visual Studio Solutions

    - by JonWillis
    Hopefully a relatively simple question. I'm starting work on a new internal project to create traceability of repaired devices within the buildings. The database is stored remotely on a web server and will be accessed via a web API (JSON output), protected with OAuth. The front-end GUI is being done in WPF, and the business code in C#. From this, I see the different layers: presentation/application/datastore. There will be code for managing all the authenticated calls to the API, classes to represent entities (business objects), classes to construct the entities (business objects), parts for the WPF GUI, parts of the WPF viewmodels, and so on. Is it best to create this in a single project, or split them into individual projects? In my heart I say it should be multiple projects. I have done it both ways previously and found testing to be easier with a single-project solution, but with multiple projects recursive dependencies can crop up, especially when classes have interfaces to make them easier to test; I've found things can become awkward.
