Search Results

Search found 27016 results on 1081 pages for 'entry point'.


  • Picking Core Language For Large Scale Web Platform

    - by ryanzec
    I have worked with PHP and ASP.NET quite a bit and have also played around with a few other languages for web development. I am now at a point where I need to start building a backend platform that will have the ability to support a large set of applications, and I am trying to figure out which language I want to choose as my core language. When I say core language I mean the language that the majority of the backend code is going to be in. This is not to say that other languages won't be used, because my guess is that they will, but I want a large majority of the code (90%-98%) to be in one language. While I see the benefit of using the language that is best for the job, having 15% in PHP, 15% in ASP.NET, 5% in Perl, 10% in Python, 15% in Ruby, etc… seems like a very bad idea to me (not to mention that integrating everything seamlessly would probably add a bit of overhead). If you were going to build a large scale web platform that needs to support multiple applications from scratch, what would you choose as your core language and why?

    Read the article

  • OWA no longer accessing 1 backend exchange server

    - by Morchuboo
    We have IIS hosting OWA as the web frontend to 3 backend Exchange servers. Yesterday we got a lot of event 9791 warnings: "Cleanup of the DeliveredTo table for database 'Second Storage Group\Mailbox Store EUROPE 2' was pre-empted because the database engine's version store was growing too large. 0 entries were purged." At this point the server was crawling. Our mail admin is currently away and not contactable, so we rebooted the server. Everything seems OK when reading mail from Outlook and evolution-mapi clients, but OWA and ActiveSync connections cannot get through. When logging into OWA, users whose mailboxes are not on this backend server are fine, but users on this server can log into the OWA frontend and then, once they submit their credentials, the page returns a 503 service unavailable error. We have since restarted the affected Exchange server and the IIS server, as well as running iisreset /noforce, but the problem persists. Can anyone suggest what we should look at...

    Read the article

  • What is wrong with my logic for the divide and conquer algorithm for Closest pair problem?

    - by Programming Noob
    I have been following Coursera's course on Algorithms and came up with a thought about the divide-and-conquer algorithm for the closest pair problem that I want clarified. As per Prof Roughgarden's algorithm (which you can see here if you're interested): For a given set of points P, of which we have two copies sorted in the X and Y directions, Px and Py, the algorithm closestPair(Px,Py) can be given as: 1) Divide the points into a left half Q and a right half R, and form sorted copies of both halves along the x and y directions - Qx,Qy,Rx,Ry. 2) Let closestPair(Qx,Qy) be points p1 and q1. 3) Let closestPair(Rx,Ry) be p2,q2. 4) Let delta be the minimum of dist(p1,q1) and dist(p2,q2). 5) In the unfortunate case, let p3,q3 be closestSplitPair(Px,Py,delta). 6) Return the best result. Now, the clarification that I want is related to step 5. I should say beforehand that what I'm suggesting is barely any improvement at all, but if you're still interested, read ahead. Prof R says that since the points are already sorted in the X and Y directions, to find the best pair in step 5 we need to iterate over the points in the strip of width 2*delta, from bottom to top, and in the inner loop we need only 7 comparisons. Can this be bettered to just one? How I think this is possible seemed a little difficult to explain in plain text, so I drew a diagram, wrote it on paper and uploaded it here: Since no one else came up with this, I'm pretty sure there's some error in my line of thought. But I have literally been thinking about this for HOURS now, and I just HAD to post this. It's all that is in my head. Can someone point out where I'm going wrong?

    Read the article

  • My computer will not reboot after fresh install of ubuntu 12.04LTS

    - by user170715
    I bought a new computer yesterday and it came with Windows 8. When installing Ubuntu, I chose the erase-and-install option, thinking that Ubuntu would install easily like it did for my old laptop... The install was successful, and after following the instructions telling me to reboot to finish the installation and remove the installation media, my computer booted fine. However, once I began installing updates via Update Manager and activating the additional driver {ATI/AMD proprietary FGLRX graphics driver (post-release updates)} out of the following: Experimental AMD binary Xorg driver and kernel module; ATI/AMD proprietary FGLRX graphics driver (*experimental*beta); ATI/AMD proprietary FGLRX graphics driver (post-release updates); and then rebooted to finish making changes, I got an error (Reboot and select proper boot device). At this point I was stuck, so I eventually reinstalled Ubuntu and repeated the exact same steps until right before rebooting to finish making changes. However, this time I used the Boot Repair tool: sudo add-apt-repository ppa:yannubuntu/boot-repair sudo apt-get update sudo apt-get install -y boot-repair boot-repair After running the program I get a "boot successfully repaired" message. Then I try to reboot again and get the GNU GRUB screen where it asks whether I would like to boot: normal, recovery, or memorytest. Once it begins loading, you see messages scrolling across the screen, then it pauses and doesn't do anything further. If someone could tell me how to fix this or get Windows 8 back soon, I'd appreciate it, because like I said I just bought it yesterday and now I can't even use it.

    Read the article

  • Stop Windows 7 from forcefully shutting itself down when waiting to update

    - by Kenneth
    I have had this happen to me twice now. I'll be in the middle of working on something that has resulted in me not rebooting my machine for several days. If Windows 7 is waiting to install updates, it'll eventually get to a point where it forces a restart to install them. The problem is that it doesn't notify me or tell me that it's going to do this, and I've lost work as a result; most recently a VM became corrupt and unusable. Is there a way to at least get it to prompt me before doing this? It used to...

    Read the article

  • Is it possible to upgrade using the Windows Server 2012 evaluation?

    - by Cerebrate
    I've got a Windows Server 2008 Standard installation here that I'm trying to upgrade to Windows Server 2012 Standard, using the evaluation version. (The scenario is essentially that I need to test the upgrade, and specifically the upgrade process, before we spend the money on going ahead with the actual upgrade.) When I try to upgrade, it fails with the message: "Windows Server 2008 Standard cannot be upgraded to Windows Server 2012 Standard Evaluation (Server with a GUI). You can choose to install a new... (etc., etc.)" Is this non-upgradability a known limitation of the evaluation version? (Unfortunately, I haven't found a clear answer on this point.) And if not, any thoughts on where else I might look for the problem and solutions to it?

    Read the article

  • Passenger package not found after adding phusion repo Ubuntu 12.04

    - by speshak
    I'm trying to install the official Passenger (and Nginx) packages from Phusion on an Ubuntu 12.04.3 server. I have the following in /etc/apt/sources.list.d/passenger: deb https://oss-binaries.phusionpassenger.com/apt/passenger precise main Even after running apt-get update, apt finds no passenger package. I did verify that the package info appears in /var/lib/apt/lists/oss-binaries.phusionpassenger.com_apt_passenger_dists_precise_main_binary-amd64_Packages, but at this point I'm at a loss as to why the package isn't available via apt-get. Some packages (libapache2-mod-passenger, passenger-docs) are available, but those also seem to exist in universe, and apt-cache show lists both locations.
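
    A hedged diagnostic sketch, not a fix - just the commands I'd use to see where apt thinks the package should come from (the package names are the ones mentioned in the excerpt):

        apt-cache policy passenger        # candidate version and which repo, if any
        apt-cache madison passenger       # every repo/version apt knows about
        apt-cache search passenger        # what the package is actually called here
        grep -r passenger /etc/apt/sources.list /etc/apt/sources.list.d/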

    Read the article

  • Handling deleted users - separate or same table?

    - by Alan Beats
    The scenario is that I've got an expanding set of users, and as time goes by, users will cancel their accounts, which we currently mark as 'deleted' (with a flag) in the same table. If users with the same email address (that's how users log in) wish to create a new account, they can sign up again, but a NEW account is created. (We have unique ids for every account, so email addresses can be duplicated amongst live and deleted ones.) What I've noticed is that all across our system, in the normal course of things, we constantly query the users table checking that the user is not deleted, whereas what I'm thinking is that we don't need to do that at all...! [Clarification 1: by 'constantly querying', I meant that we have queries like: '... FROM users WHERE isdeleted="0" AND ...'. For example, we may need to fetch all users registered for all meetings on a particular date, so in THAT query we also have FROM users WHERE isdeleted="0" - does this make my point clearer?] So the options are: (1) continue keeping deleted users in the 'main' users table, or (2) keep deleted users in a separate table (mostly required for historical book-keeping). What are the pros and cons of either approach?

    Read the article

  • Why does my MAC address reset after reconnecting?

    - by Mr.Student
    I have Ubuntu 12. I'm changing my MAC address with ifconfig wlan0 hw ether xx:xx:xx:xx:xx:xx, which works. However, when I restart my connection my computer resets my MAC to my original MAC address. I'm guessing that this happens because something calls... ifconfig wlan0 down ... do something before connecting ifconfig wlan0 up ... connect to designated access point I want my MAC address to stay the same no matter how many times I disconnect and reconnect, whether to another network or the same one. Also, it would be nice to turn off the auto-connect feature for my network manager without having to edit each individual connection. Lastly, I would like to know how to connect to a wifi network through the terminal and not via the GUI network manager Ubuntu provides.
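
    One way to read this: re-apply the spoofed MAC each time the interface comes back up, and associate from the terminal instead of the GUI. A minimal sketch follows; the interface name, SSID and passphrase are placeholders, and NetworkManager may need to be told to leave wlan0 alone first so it doesn't override the change (that caveat is my assumption, not something from the question):

        sudo ifconfig wlan0 down
        sudo ifconfig wlan0 hw ether xx:xx:xx:xx:xx:xx                # re-apply the spoofed MAC
        sudo ifconfig wlan0 up
        wpa_passphrase "MySSID" "my-passphrase" | sudo tee /etc/wpa_supplicant.conf
        sudo wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant.conf   # associate with the AP
        sudo dhclient wlan0                                           # request an address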

    Read the article

  • Installing my own XP VHD into Win 7.

    - by malcolms
    Hi, I am following the instructions on this site to install my own XP VHD into Windows 7 Virtual PC (XP Mode): http://www.windows7hacker.com/index.php/2009/09/how-to-make-your-own-vhd-to-xp-mode-or-even-vista-mode-ready/ The problem is that when I click Install Integration Components on the Tools menu, the setup does not start. And I do not understand the message that comes up, because I can't start the VM at this point at all. It says "If setup does not run automatically, open the CD-ROM drive inside the virtual machine and run setup". So what does that mean when I can't run the VM? Malcolm

    Read the article

  • How to pass one float as four unsigned chars to a shader with glVertexAttribPointer?

    - by Kog
    For each vertex I use two floats for the position and four unsigned bytes for the color. I want to store all of them in one table, so I tried casting those four unsigned bytes to one float, but I am unable to do that correctly... All in all, my tests came down to one point: GLfloat vertices[] = { 1.0f, 0.5f, 0, 1.0f, 0, 0 }; glEnableVertexAttribArray(0); glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), vertices); // VER1 - draws red triangle // unsigned char colors[] = { 0xff, 0, 0, 0xff, 0xff, 0, 0, 0xff, 0xff, 0, 0, // 0xff }; // glEnableVertexAttribArray(1); // glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 4 * sizeof(GLubyte), // colors); // VER2 - draws greenish triangle (not "pure" green) // float f = 255 << 24 | 255; //Hex:0xff0000ff // float colors2[] = { f, f, f }; // glEnableVertexAttribArray(1); // glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 4 * sizeof(GLubyte), // colors2); // VER3 - draws red triangle int i = 255 << 24 | 255; //Hex:0xff0000ff int colors3[] = { i, i, i }; glEnableVertexAttribArray(1); glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 4 * sizeof(GLubyte), colors3); glDrawArrays(GL_TRIANGLES, 0, 3); The above code is used to draw one simple red triangle. My question is - why do versions 1 and 3 work correctly, while version 2 draws some greenish triangle? The hex values are the ones I read by inspecting the variables during debugging. They are equal for versions 2 and 3 - so what causes the difference?

    Read the article

  • Very high memory usage, but not claimed by any process?

    - by SharkWipf
    While stress-testing LVM on one of our Debian servers, I came across this issue where memory fills up to the point where it runs the server out of memory, but no process claims the memory. See http://i.imgur.com/cLn5ZHS.png, and see http://serverfault.com/a/449102/125894 for an explanation of the colors used in htop. Why is this happening? And is there any way to see what is using the memory? htop is configured not to hide any processes, so what is it that htop is missing? In this particular case, I can fairly certainly say that it is caused, directly or indirectly, by lvcreate, lvremove or dmsetup, as that is what I was stress-testing. Do note that this question is not about solving the LVM problem, but about why the memory isn't claimed by any process. Stopping all LVM commands does bring the memory back down to <600MB.
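
    When no process owns the memory, it is usually sitting on the kernel side (slab caches, device-mapper structures and the like), which htop's per-process list can never show. A small sketch of where I'd look first - pointing at the slab caches is my assumption, not something stated in the question:

        grep -E 'Slab|SReclaimable|SUnreclaim' /proc/meminfo   # total kernel slab usage
        sudo slabtop -o | head -n 20                           # largest slab caches, one-shot
        grep -i vmalloc /proc/meminfo                          # vmalloc'd kernel memory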

    Read the article

  • Memory management (segmentation and paging) in 80286 and 80386: How does it work?

    - by Andrew J. Brehm
    I found lots of Web sites and books explaining how memory management worked on the 8086 and later x86 CPUs in Real Mode. I understand, I think, how the two 16-bit values, segment address and offset, are combined to get a linear 20-bit physical address (shift the segment four bits to the left, then add the offset; segments are 64K and start every 16 bytes). But I couldn't find any good Web sites or books that explain how memory management works in Protected Mode, specifically the differences between the 80286 and 80386. Can anyone point me to a good Web site or book (or explain it right here)? (For extra credit, i.e. an upvote, how does it work in Long Mode?)
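
    For reference, a quick worked instance of the real-mode formula described above (the particular segment:offset pair is only an illustration):

        segment:offset   = 0x1234:0x5678
        0x1234 << 4      = 0x12340
        0x12340 + 0x5678 = 0x179B8   (20-bit physical address)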

    Read the article

  • How can I properly create /dev/dvd?

    - by chazomaticus
    Certain programs look for /dev/dvd by default to find DVDs. When I first boot my computer without a DVD inserted, /dev/dvd exists and points to the correct place (/dev/sr0). However, when I insert a DVD, /dev/dvd disappears. I'd like it to stick around so I don't have to navigate to /dev/sr0 in programs that are looking for DVDs. How do I ensure that the /dev/dvd symlink exists and points to the right place? It looks like I can add something to /etc/udev/rules.d/70-persistent-cd.rules. This site gives a couple of examples, but the 70-persistent-cd.rules file says "add the ENV{GENERATED}=1 flag to your own rules", which isn't part of the examples. The man 7 udev page is impenetrable to me, and I'm not convinced the linked page gives 100% of the information I need. So, what can I do on a modern, Ubuntu 12.04 (or later) system to make /dev/dvd always exist and point to the right device? EDIT: Is it as simple as adding ENV{GENERATED}=1 to the rules in the linked page, something like this: SUBSYSTEM=="block", KERNEL=="sr0", SYMLINK+="dvd", GROUP="cdrom", ENV{GENERATED}=1 Is that the right information for modern Ubuntu? What is ENV{GENERATED} doing there, when it wasn't generated, but hand-written?
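
    A minimal sketch of how that could be wired up, assuming the rule quoted in the question is what's wanted - the file name is my own choice, and ENV{GENERATED}=1 is included only because 70-persistent-cd.rules asks for it:

        # /etc/udev/rules.d/99-local-dvd.rules
        SUBSYSTEM=="block", KERNEL=="sr0", SYMLINK+="dvd", GROUP="cdrom", ENV{GENERATED}=1

        # re-evaluate the rules without rebooting:
        sudo udevadm control --reload-rules
        sudo udevadm trigger --subsystem-match=block
        ls -l /dev/dvd    # should point at sr0 again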

    Read the article

  • Juggling with JDKs on Apple OS X

    - by Blueberry Coder
    I recently got a shiny new MacBook Pro to help me support our ADF Mobile customers. It is really a wonderful piece of hardware, although I am still adjusting to Apple's peculiar keyboard layout. Did you know, for example, that the « delete » key actually performs a « backspace »? But I digress... As you may know, ADF Mobile development still requires JDeveloper 11gR2, which in turn runs on Java 6. On the other hand, JDeveloper 12c needs JDK 7. I wanted to install both versions, and wasn't sure how to do it. If you remember, I explained in a previous blog entry how to install JDeveloper 11gR2 on Apple's OS X. The trick was to use the /usr/libexec/java_home command in order to invoke the proper JDK. In this case, I could have done the same thing; the two JDKs can coexist without any problems, since they install in completely different locations. But I wanted more than just installing JDeveloper. I wanted to be able to select my JDK when using the command line as well. On Windows, this is easy, since I keep all my JDKs in a central location. I simply have to move to the appropriate folder or type the folder name in the command I want to execute. Problem is, on OS X, the paths to the JDKs are... let's say convoluted. Here is the one for Java 6. /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home The Java 7 path is not better, just different. /Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home Intuitive, isn't it? Clearly, I needed something better... On OS X, the default command shell is bash. It is possible to configure the shell environment by creating a file named « .profile » in a user's home folder. Thus, I created such a file and put the following inside: export JAVA_7_HOME=$(/usr/libexec/java_home -v1.7) export JAVA_6_HOME=$(/usr/libexec/java_home -v1.6) export JAVA_HOME=$JAVA_7_HOME alias java6='export JAVA_HOME=$JAVA_6_HOME' alias java7='export JAVA_HOME=$JAVA_7_HOME' The first two lines retrieve the current paths for Java 7 and Java 6 and store them in two environment variables. The third line marks Java 7 as the default. The last two lines create command aliases. Thus, when I type java6, the value for JAVA_HOME is set to JAVA_6_HOME, for example. I now have an environment which works even better than the one I have on Windows, since I can change my active JDK on a whim. Here is a sample, fresh from my terminal window. fdesbien-mac:~ fdesbien$ java6 fdesbien-mac:~ fdesbien$ java -version java version "1.6.0_65" Java(TM) SE Runtime Environment (build 1.6.0_65-b14-462-11M4609) Java HotSpot(TM) 64-Bit Server VM (build 20.65-b04-462, mixed mode) fdesbien-mac:~ fdesbien$ fdesbien-mac:~ fdesbien$ java7 fdesbien-mac:~ fdesbien$ java -version java version "1.7.0_45" Java(TM) SE Runtime Environment (build 1.7.0_45-b18) Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode) fdesbien-mac:~ fdesbien$ Et voilà! Maximum flexibility without downsides, just as I like it.

    Read the article

  • Functions that only call other functions. Is this a good practice?

    - by Eric C.
    I'm currently working on a set of reports that have many different sections (all requiring different formatting), and I'm trying to figure out the best way to structure my code. Similar reports we've done in the past end up having very large (200+ line) functions that do all of the data manipulation and formatting for the report, such that the workflow looks something like this: DataTable reportTable = new DataTable(); void RunReport() { reportTable = DataClass.getReportData(); largeReportProcessingFunction(); outputReportToUser(); } I would like to be able to break these large functions up into smaller chunks, but I'm afraid that I'll just end up having dozens of non-reusable functions, and a similar "do everything here" function whose only job is to call all these smaller functions, like so: void largeReportProcessingFunction() { processSection1HeaderData(); calculateSection1HeaderAverages(); formatSection1HeaderDisplay(); processSection1SummaryTableData(); calculateSection1SummaryTableTotalRow(); formatSection1SummaryTableDisplay(); processSection1FooterData(); getSection1FooterSummaryTotals(); formatSection1FooterDisplay(); processSection2HeaderData(); calculateSection1HeaderAverages(); formatSection1HeaderDisplay(); calculateSection1HeaderAverages(); ... } Or, if we go one step further: void largeReportProcessingFunction() { callAllSection1Functions(); callAllSection2Functions(); callAllSection3Functions(); ... } Is this really a better solution? From an organizational point of view I suppose it is (i.e. everything is much more organized than it might otherwise be), but as far as code readability I'm not sure (potentially large chains of functions that only call other functions). Thoughts?

    Read the article

  • How to do MVC the right way

    - by Ieyasu Sawada
    I've been doing MVC for a few months now using the CodeIgniter framework in PHP but I still don't know if I'm really doing things right. What I currently do is: Model - this is where I put database queries (select, insert, update, delete). Here's a sample from one of the models that I have: function register_user($user_login, $user_profile, $department, $role) { $department_id = $this->get_department_id($department); $role_id = $this->get_role_id($role); array_push($user_login, $department_id, $role_id); $this->db->query("INSERT INTO tbl_users SET username=?, hashed_password=?, salt=?, department_id=?, role_id=?", $user_login); $user_id = $this->db->insert_id(); array_push($user_profile, $user_id); $this->db->query(" INSERT INTO tbl_userprofile SET firstname=?, midname=?, lastname=?, user_id=? ", $user_profile); } Controller - talks to the model, calls up the methods in the model which queries the database, supplies the data which the views will display(success alerts, error alerts, data from database), inherits a parent controller which checks if user is logged in. Here's a sample: function create_user(){ $this->load->helper('encryption/Bcrypt'); $bcrypt = new Bcrypt(15); $user_data = array( 'username' => 'Username', 'firstname' => 'Firstname', 'middlename' => 'Middlename', 'lastname' => 'Lastname', 'password' => 'Password', 'department' => 'Department', 'role' => 'Role' ); foreach ($user_data as $key => $value) { $this->form_validation->set_rules($key, $value, 'required|trim'); } if ($this->form_validation->run() == FALSE) { $departments = $this->user_model->list_departments(); $it_roles = $this->user_model->list_roles(1); $tc_roles = $this->user_model->list_roles(2); $assessor_roles = $this->user_model->list_roles(3); $data['data'] = array('departments' => $departments, 'it_roles' => $it_roles, 'tc_roles' => $tc_roles, 'assessor_roles' => $assessor_roles); $data['content'] = 'admin/create_user'; parent::error_alert(); $this->load->view($this->_at, $data); } else { $username = $this->input->post('username'); $salt = $bcrypt->getSalt(); $hashed_password = $bcrypt->hash($this->input->post('password'), $salt); $fname = $this->input->post('firstname'); $mname = $this->input->post('middlename'); $lname = $this->input->post('lastname'); $department = $this->input->post('department'); $role = $this->input->post('role'); $user_login = array($username, $hashed_password, $salt); $user_profile = array($fname, $mname, $lname); $this->user_model->register_user($user_login, $user_profile, $department, $role); $data['content'] = 'admin/view_user'; parent::success_alert(4, 'User Sucessfully Registered!', 'You may now login using your account'); $data['data'] = array('username' => $username, 'fname' => $fname, 'mname' => $mname, 'lname' => $lname, 'department' => $department, 'role' => $role); $this->load->view($this->_at, $data); } } Views - this is where I put html, css, and JavaScript code (form validation code for the current form, looping through the data supplied by controller, a few if statements to hide and show things depending on the data supplied by the controller). 
<!--User registration form--> <form class="well min-form" method="post"> <div class="form-heading"> <h3>User Registration</h3> </div> <label for="username">Username</label> <input type="text" id="username" name="username" class="span3" autofocus> <label for="password">Password</label> <input type="password" id="password" name="password" class="span3"> <label for="firstname">First name</label> <input type="text" id="firstname" name="firstname" class="span3"> <label for="middlename">Middle name</label> <input type="text" id="middlename" name="middlename" class="span3"> <label for="lastname">Last name</label> <input type="text" id="lastname" name="lastname" class="span3"> <label for="department">Department</label> <input type="text" id="department" name="department" class="span3" list="list_departments"> <datalist id="list_departments"> <?php foreach ($data['departments'] as $row) { ?> <option data-id="<?php echo $row['department_id']; ?>" value="<?php echo $row['department']; ?>"><?php echo $row['department']; ?></option> <?php } ?> </datalist> <label for="role">Role</label> <input type="text" id="role" name="role" class="span3" list=""> <datalist id="list_it"> <?php foreach ($data['it_roles'] as $row) { ?> <option data-id="<?php echo $row['role_id']; ?>" value="<?php echo $row['role']; ?>"><?php echo $row['role']; ?></option> <?php } ?> </datalist> <datalist id="list_collection"> <?php foreach ($data['tc_roles'] as $row) { ?> <option data-id="<?php echo $row['role_id']; ?>" value="<?php echo $row['role']; ?>"><?php echo $row['role']; ?></option> <?php } ?> </datalist> <datalist id="list_assessor"> <?php foreach ($data['assessor_roles'] as $row) { ?> <option data-id="<?php echo $row['role_id']; ?>" value="<?php echo $row['role']; ?>"><?php echo $row['role']; ?></option> <?php } ?> </datalist> <p> <button type="submit" class="btn btn-success">Create User</button> </p> </form> <script> var departments = []; var roles = []; $('#list_departments option').each(function(i){ departments[i] = $(this).val(); }); $('#list_it option').each(function(i){ roles[roles.length + 1] = $(this).val(); }); $('#list_collection option').each(function(i){ roles[roles.length + 1] = $(this).val(); }); $('#list_assessor option').each(function(i){ roles[roles.length + 1] = $(this).val(); }); $('#department').blur(function(){ var department = $.trim($(this).val()); $('#role').attr('list', 'list_' + department); }); var password = new LiveValidation('password'); password.add(Validate.Presence); password.add(Validate.Length, {minimum: 10}); $('input[type=text]').each(function(i){ var field_id = $(this).attr('id'); var field = new LiveValidation(field_id); field.add(Validate.Presence); if(field_id == 'department'){ field.add(Validate.Inclusion, {within : departments}); } else if(field_id == 'role'){ field.add(Validate.Inclusion, {within : roles}) } }); </script> The codes above are actually code from the application that I'm currently working on. I'm working on it alone so I don't really have someone to review my code for me and point out the wrong things in it so I'm posting it here in hopes that someone could point out the wrong things that I've done in here. I'm also looking for some guidelines in writing MVC code like what are the things that should be and shouldn't be included in views, models and controllers. How else can I improve the current code that I have right now. I've written some really terrible code before(duplication of logic, etc.) 
that's why I want to improve my code so that I can easily maintain it in the future. Thanks!

    Read the article

  • M-Audio Delta starts up at wrong sample rate

    - by steevc
    When the PC starts, my M-Audio Delta 66 is using a 48000 Hz sample rate when it is set for 44100 in Envy24. This causes audio to play slower than it should. This is in Kubuntu 14.04 on my new PC using an AMD A8 6500 with 8GB. When I first installed it, it seemed okay, but at some point it went wrong and it has been doing this consistently since then. Kernel is 3.13.0-24-generic #47-Ubuntu SMP Fri May 2 23:31:42 UTC 2014 i686 athlon i686 GNU/Linux steve@slarti:~$ cat /proc/asound/card2/pcm0p/sub0/hw_params access: MMAP_INTERLEAVED format: S32_LE subformat: STD channels: 10 rate: 48000 (48000/1) period_size: 441 buffer_size: 3528 I can get it to switch to 44100 if I disable/enable the Delta in PulseAudio volume control, but I have to do this every day and the sound still seems distorted. I can't see any issues in any of the config or log files I can find. If I boot the PC with a Mint live USB it starts at 44100 and sounds fine. I originally reported this in "Youtube is playing at the wrong speed - maybe soundcard related", but I'll close that and have this more relevant question instead.
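
    If it is PulseAudio opening the card at 48000 Hz on startup, one thing worth trying (an assumption on my part - the question doesn't say what sets the rate) is pinning the daemon's default rate and restarting it:

        # /etc/pulse/daemon.conf  (uncomment and set)
        default-sample-rate = 44100

        # restart the per-user daemon, then re-check the rate the card was opened at
        pulseaudio -k && pulseaudio --start
        grep rate /proc/asound/card2/pcm0p/sub0/hw_params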

    Read the article

  • Adventures in Lab Management Configuration: CMMI Edition Part 1 of 3

    - by Enrique Lima
    I remember at one point someone telling me how close Migrate is to Migraine. This was a process that involved migrating an environment from TFS 2008 to TFS 2010, and the process template needed to be migrated too. Here we are talking about CMMI v4.2 to CMMI v5.0. Now, the process to migrate the TFS infrastructure is one thing; migrating the process template is a different deal - not hard … just involved. I followed a combination of steps that came from a blog post as the main guidance and then MSDN (as suggested on the guidance post) to complement some tasks and steps. Again, the focus I have here is CMMI. The high level steps taken to enable the TFS 2008 CMMI v4.2 process template to be migrated to the TFS 2010 process template are: 1) Backup the Collection, Configuration and Warehouse databases. 2) Download the process template using Visual Studio 2010. 3) Export, modify and import the Bug type definition. 4) Export, modify and import the Scenario or Requirement type definition. 5) Create and import bug field mappings. Now we can attempt to connect using Test Manager, and you should be able to get this going. After that was done, it was time to enroll VMs that already existed in the environment. This was a bit more challenging, but in the end it was a matter of analyzing the changes that had been made to keep a temporary workaround in place from the time we migrated to the time we converted the work items, and adding fields to enable communication between the project and the Test and Lab Manager component. There are 2 more parts to this post: the second will describe the detailed steps taken to complete the process template update, and the third will talk about the gotchas and fixes for the Lab Management portion.

    Read the article

  • Lost Root and other user passwords

    - by Webnet
    This isn't a huge deal, because there's very little on the server (literally a file or two) that we actually need off of it. But we disabled root logins as a security measure and can't remember any of our other user passwords. I'm assuming that there's nothing we can do at this point to get into the server? I'm sitting next to the box... Update Oops... actually, I need to export an SVN off of this server. So yeah, there's stuff I need.

    Read the article

  • New SPC2 benchmark- The 7420 KILLS it !!!

    - by user12620172
    This is pretty sweet. The new SPC2 benchmark came out last week, and the 7420 not only came in 2nd of ALL speed scores, but came in #1 for price per MBPS. Check out this table. The 7420 score of 10,704 makes it really fast, but that's not the best part. The price one would have to pay in order to beat it is ridiculous. You can go see for yourself at http://www.storageperformance.org/results/benchmark_results_spc2 - the only system on the whole page that beats it was over twice the price per MBPS. Very sweet for Oracle. So let's see: the 7420 is the fastest per $. The 7420 is the cheapest per MBPS. The 7420 has incredible built-in features, management services, analytics, and protocols. It's extremely stable and as a cluster has no single point of failure. It won the Storage Magazine award for best NAS system this year. So how long will it be before it's the number 1 NAS system in the market? What are the biggest hurdles still stopping the widespread adoption of the ZFSSA? From what I see, it's three things: 1. Administrators' comfort level with older legacy systems. 2. Politics. 3. Past issues with Oracle Support. I see all of these issues crop up regularly. Number 1 just takes time and education. Number 3 takes time with our new, better, and growing support team. Many of them came from Oracle, and there were growing pains when they went from a straight software model to having to also support hardware. Number 2 is tricky, but it's the job of the sales teams to break through the internal politics and help their clients see the value in Oracle hardware systems. Benchmarks like this will help.

    Read the article

  • When can you call yourself good at language X?

    - by SoulBeaver
    This goes back to a conversation I've had with my girlfriend. I tried to tell her that I simply don't feel adequate enough in my programming language (C++) to call myself good. She then asked me, "Well, when do you consider yourself good enough?" That's an interesting question. I didn't know what to tell her. So I'm asking you. For any programming language, framework or the like, when do you reach a point where you sit back, look at what you've done and say, "Hey, I'm actually pretty good at this."? How do you define "good" so that you can tell others, honestly, "Yeah, I'm good at X"? Additionally, do you reach these conclusions by comparing with what others can do? Additional Info I have read the canonical paper on how it takes ten thousand hours before you are an expert in the field. (Props to anybody who knows what this paper is called again.) I have also read various articles from Coding Horror about interviewing people. Some people, it was said, "Cannot function outside of a framework." So they may be "good" for that framework, but not otherwise in the language. Is this true?

    Read the article

  • Changing Admin Site URL (actually port) - how?

    - by TomTom
    I have a new install of the brand new SharePoint 2010. I use host-header-identified site collections for everything. By default the admin site is on a random port. I would like to move the admin site to port 80, on the server name. As all sites have coded names (for example "intranet", "projects"), this would allow administration via the server name - which is easier, as external access does not have to remember the port number. How do I do this? I already changed the default URL, but the site (application) is still wrongly mapped. I don't find anything to change the IIS settings in the admin site. I possibly just missed it - so can anyone point me in the right direction?

    Read the article

  • Best way to troubleshoot apache not starting?

    - by lowgain
    We have recently gotten a backup server to mirror all our data onto in case the primary server goes down. I've gotten all the sites' data updated through rsync, and all the Apache config and databases updated. Both machines are on Ubuntu 9 (9.04 on the primary, 9.10 on the backup). So everything seems synced up for the most part at this point (I still need to figure out user syncing), and I try to start Apache. I get * Starting web server apache2 [fail] Nothing else indicates what the problem could be. I know I don't have enough info to expect a solution from you guys, so I'd just like to know where I can go from here to further investigate this issue. Would there be any error logs for this? Thanks!
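
    A hedged first-pass checklist - the log locations are the stock Debian/Ubuntu defaults, which is an assumption rather than something stated in the question:

        sudo apache2ctl configtest              # syntax-check the copied config
        sudo tail -n 50 /var/log/apache2/error.log
        sudo netstat -tlnp | grep ':80'         # is something else already bound to port 80?
        ls -ld /var/log/apache2 /var/www        # do the dirs the config references exist?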

    Read the article

  • H.264 Pricing confusion

    - by Jamie Taylor
    Not sure if this is the right place, but there seem to be other H.264 questions here; if not, please point me to the right place! I am taking uploaded content and encoding it to H.264, but I'm confused about the pricing. Do I have to pay to host and stream this video on my server (in Ireland)? I've tried looking around Google but there doesn't seem to be a definitive answer. Some countries have exceptions, some pay $2500. What kind of information should I be looking at to understand the pricing model more? Thanks.

    Read the article
