Search Results

Search found 9117 results on 365 pages for 'systems analysis'.

Page 60/365

  • What is the default mount point on Linux systems (all)?

    - by Vijay Sharma
    My question is very basic. Is the mount point for external media (like USB drives) always /media? On a Debian system, any USB device I plug in gets mounted under the /media folder. Is that also the case with other Linux flavors like Fedora, Ubuntu, etc.? If a USB device is automatically mounted, will it always go to the /media directory?
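
    One way to see where a given distribution actually mounted a device is to read /proc/mounts; a minimal sketch (treating /dev/sd* devices as the likely USB mass-storage entries, which is an assumption):

      # List mount points of block devices, e.g. to see whether a USB stick
      # landed under /media. Reads /proc/mounts, present on any modern Linux.
      with open("/proc/mounts") as f:
          for line in f:
              device, mount_point, fs_type = line.split()[:3]
              if device.startswith("/dev/sd"):  # assumption: USB storage shows up as /dev/sd*
                  print(f"{device} -> {mount_point} ({fs_type})")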

    Read the article

  • Which is the best smart automatic file replication solution for cloud-storage-based systems?

    - by TORr0t
    I am looking for a solution for a project I am working on. We are developing a web system where people can upload their files and other people can download them (similar to the rapidshare.com model). The problem is that some files are in far higher demand than others. The scenario: I upload my birthday video to myproject.com and share it with all of my friends; it is stored on one cluster node with a 100 Mbit connection. Once all of my friends want to download the file, they can't, because the bottleneck is that 100 Mbit link - roughly 12.5 MB per second - and with 1,000 friends downloading, each gets only about 12.5 KB per second. I am not even taking into account that the same disk is serving the same file. My network infrastructure is as follows: a 1 Gbit (client-facing) server connected to 4 storage nodes, each with a 100 Mbit connection. The 1 Gbit server can handle the 1,000 users' traffic if the storage layer can stream more than a single node's ~12.5 MB per second to it, since visitors stream directly from the client server instead of from the storage nodes. I can achieve that by replicating the file onto 2 nodes. But I don't want to replicate every file uploaded to my network, since that costs much more. So I need a cloud-based system that automatically pushes files onto replica nodes when demand for them is high and, when demand drops, deletes the extra copies so each file stays on only 1 node. I have looked at Gluster and asked in their IRC channel; Gluster can't do such a thing. It can only replicate all of the files or none of them. I need the cluster software to do this automatically. Any solutions (other than recommending Amazon S3)?
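
    No off-the-shelf product is suggested here, but the demand-triggered policy the poster describes can be stated precisely; a minimal sketch where the throughput figures and the replica cap are illustrative assumptions:

      # Hypothetical demand-based replication policy: add a replica when a
      # file's concurrent downloads saturate one node's ~12.5 MB/s (100 Mbit)
      # link, and drop back to a single copy when demand subsides.
      NODE_MBPS = 12.5       # usable throughput of one 100 Mbit storage node
      PER_USER_MBPS = 0.5    # target per-download rate (assumption)

      def desired_replicas(concurrent_downloads: int, max_replicas: int = 4) -> int:
          needed_mbps = concurrent_downloads * PER_USER_MBPS
          replicas = max(1, -(-needed_mbps // NODE_MBPS))  # ceiling division
          return int(min(replicas, max_replicas))

      # Example: 100 simultaneous downloads need 50 MB/s -> 4 replicas.
      print(desired_replicas(100))

    A background job could run this against per-file download counters and call the cluster's replicate/unreplicate hooks whenever the desired count changes.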

    Read the article

  • What are good systems for managing PHP/MySQL infrastructure?

    - by sbrattla
    I work in a company which is about to migrate most applications from in-house custom-built Java/Tomcat applications to Drupal. Due to company policies, applications and websites need to run on in-house servers. This means we need infrastructure for Drupal (PHP/MySQL) applications. This must have been solved a million times already; I believe this is what web-hosting companies do every day. Even though we work on a much smaller scale than web-hosting companies, I assume it makes sense to approach the task as if we were going to run an internal small-scale web-hosting company: the guys in IT operations would be "responsible" for "offering" web hosting, while developers would use these "services". We have three environments: dev(elopment), test, and prod(uction). It would make sense for developers to be able to log in to a system and create/edit/delete dev and test sites as they like. Production sites should be available through the same system, but only to IT ops. We need to work with clusters of web servers, meaning that an administration system should be capable of creating/editing/deleting sites across multiple servers. I know there's no "this is it" answer to my question, but what would be a good place to start? Apart from the actual hardware, what would be a good administration system for this?

    Read the article

  • Does the lack of updates in the Xen kernel have any relation to the guest operating systems?

    - by austin powers
    Hi, I ran into these two problems: http://serverfault.com/questions/133578/not-able-to-install-g-and-gcc-on-debian (there is a second link too, but due to my low reputation I couldn't include it). I am wondering whether the host operating system (Xen) has any relation to its guest operating system or not. When I type uname -r on my VPS it shows 2.6.18-164.9.1.el5xen, whereas the OS installed on my VPS is Debian 5.04. Regards.

    Read the article

  • What are the essential considerations for setting up systems in a location with unreliable power?

    - by dunxd
    I deal with a lot of remote offices located in parts of the world where the local grid power supply is unreliable:

    - Power can go off at any time with no warning, with outages ranging from minutes to days.
    - Power fluctuation is wild, with spikes and brownouts.

    Currently the offices have some or all of the following:

    - a generator with an inverter, or some sort of manual switch
    - a big UPS or battery array connecting a number of devices
    - several smaller APC UPSes with computers attached
    - low-cost voltage regulators, sometimes connected between mains and a UPS or device

    I know that each of these things needs to be appropriately rated for the equipment to which it is connected (although I am not sure how to calculate the correct rating - a rough sizing sketch follows below). The offices generally have the following equipment (in varying quantities):

    - some sort of internet connection device (VSAT router, ADSL modem, WiMAX router)
    - a Cisco ASA 5505 firewall
    - a bunch of PCs
    - printers
    - one server

    I don't seek to replace the advice of an electrician, but in some of these locations they only answer the questions you ask them, so I need enough understanding of the essentials to protect equipment from damage, and possibly get through some power cuts.
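
    The question doesn't include the sizing math, so here is a rough sketch of how a UPS rating is usually estimated; all wattage figures and the 0.6 power factor are illustrative assumptions - check the real nameplate ratings:

      # Rough UPS sizing: sum the load in watts, convert to VA using a power
      # factor, then add headroom. All figures are illustrative assumptions.
      loads_watts = {
          "server": 350,
          "pcs": 150 * 4,       # four PCs
          "asa_5505": 20,
          "vsat_router": 50,
          "printer_idle": 30,   # laser printers spike far higher; keep them off the UPS
      }
      power_factor = 0.6        # conservative for older switch-mode supplies
      headroom = 1.25           # 25% margin for surges and battery aging

      total_watts = sum(loads_watts.values())
      required_va = total_watts / power_factor * headroom
      print(f"{total_watts} W load -> size the UPS for at least {required_va:.0f} VA")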

    Read the article

  • How is the 137GB limit counted in Virtual PC (two systems on one disk)?

    - by Nux
    I have a dual-boot (Win7, XP) physical machine - my old computer - which I want to virtualize and move to my new one. So I've uninstalled everything I can and shrunk the partitions from a rescue CD (using GParted). Now I have two roughly 80 GiB partitions with a gap between them, so together they still seem to be above the given limit. Yet the resulting VHD (made with Disk2vhd) is well below the limit (about 110 GiB), and each partition on its own is below the limit. So my question is: is it failing due to Virtual PC's disk-size limitation, or is it failing simply because it's a dual-boot system? Or might it work if I moved the partitions next to each other (the gap between them is about 171 GB and the whole physical disk is 1 TB)?
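
    For context (not stated in the question itself): the 137 GB figure matches the 28-bit LBA addressing limit, and what matters is the highest addressed sector, not the sum of the partition sizes - which is presumably why the gap matters. The arithmetic as a quick sketch:

      # 137 GB is the 28-bit LBA boundary: 2**28 addressable sectors of
      # 512 bytes each. Only sectors below this boundary can be addressed.
      sector_size = 512
      max_sectors = 2 ** 28
      limit = max_sectors * sector_size
      print(limit)                    # 137438953472 bytes
      print(limit / 10 ** 9, "GB")    # ~137.4 decimal gigabytes
      print(limit / 2 ** 30, "GiB")   # 128.0 GiB
      # Two 80 GiB partitions with a ~171 GB gap end far past this boundary;
      # even adjacent they would span ~160 GiB, still above 128 GiB.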

    Read the article

  • My notebook can't reboot after reinstalling the operating system unless the display driver is installed?

    - by RawR Crew
    I have a small problem after reinstalling Windows 7 or Windows XP Home Edition on my notebook: I can't reboot the system if I haven't installed the display driver (ATI Radeon). The restart button on the shutdown menu disappears, which means the system can't be rebooted from there. Once I install the display driver, the restart button in the shutdown menu appears again and I can reboot. I just want to know: why does this happen? Does the display driver have any influence on the reboot process?

    Read the article

  • What is Google Page Rank and Why is it So Important?

    What exactly is PageRank? It is basically a link-analysis algorithm, influenced by citation analysis, which dates back to the fifties, when it was conceived by Eugene Garfield, and which was later developed further by Massimo Marchiori. The algorithm takes a set of hyperlinked documents and weighs each one numerically, assigning it a number between zero and ten.
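
    The article doesn't show the computation, but the core of PageRank is a short iterative calculation over the link graph. A minimal sketch; the toy graph and the 0.85 damping factor are the conventional textbook illustration, not taken from the article:

      # Minimal PageRank by power iteration: each page's score is
      # redistributed along its outgoing links, damped so scores converge.
      def pagerank(links, damping=0.85, iterations=50):
          pages = list(links)
          rank = {p: 1.0 / len(pages) for p in pages}
          for _ in range(iterations):
              new = {p: (1.0 - damping) / len(pages) for p in pages}
              for page, outgoing in links.items():
                  for target in outgoing:
                      new[target] += damping * rank[page] / len(outgoing)
              rank = new
          return rank

      # Toy graph: A and B link to C; C links back to A.
      print(pagerank({"A": ["C"], "B": ["C"], "C": ["A"]}))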

    Read the article

  • Can installing Linux or Grub make all of my operating systems slow?

    - by Geore Shg
    I installed Linux Mint 64-bit recently, but it freezes up all the time. Not only that, Grub wouldn't detect Windows. I thought it was just an issue with Linux Mint, so I booted into Ubuntu - however, that froze up as well. I downloaded Fedora to replace Linux Mint, but that also freezes up. I finally gave up trying to repair Grub, burned a Super Grub2 Disk, and used it to boot into Windows. Windows now freezes up as well. I installed a new copy of Windows on a different drive, but to no avail. Right before all this started, my computer was running very smoothly. I am wondering whether the installation of Linux Mint, the reinstallation of Grub, or my messing with the BIOS (when I was attempting to repair Grub) could have done something drastic enough that everything is slow now. I realize that computers get gradually slower over time, but this was in no way gradual, and it happened directly after the installation of Linux Mint. If so, what should I do?

    Read the article

  • Are eCommerce platforms worth it for large scale systems?

    - by codeflunky
    My company and I are building a new system for a relatively large client. We are going to replace their entire system, which includes some eCommerce aspects among many other things. It is not a typical public shopping site, and many things about the system (both back end and front end) are quite different. Some of the people I work for are convinced that we should use a third-party product to implement the eCommerce pieces (shopping cart, catalog management). Their opinion is that it is a solved problem and we shouldn't have to reinvent it. Given that direction, I have reviewed around ten different .NET-based eCommerce platforms, and I struggle to imagine how we will be able to smoothly integrate any of them without a lot of friction. They are so all-encompassing that I feel they are better suited to implementing simple shopping sites than larger systems that happen to have some eCommerce aspects. We have a really nice architecture planned for everything else (Entity Framework, ASP.NET MVC, etc.), and my gut tells me that introducing a third-party platform will cause unnecessary fragmentation and difficulty. I would love to hear some opinions from people who have been there. Have you used a third-party platform for eCommerce? Was it a typical shopping site or something different? Did you feel it was a help or a hindrance? Thanks.

    Read the article

  • Do you use Styrofoam balls to model your systems?

    - by Nick D
    [Objective-C] "Do you still use Styrofoam balls to model your systems, where each ball represents a class?" Tom Love: "We do, actually. We've also done a 3D animation version of it, which we found to be nowhere near as useful as the Styrofoam balls. There's something about a physical, conspicuous structure hanging from the ceiling right in the middle of a development project that's regularly updated to provide not only the structure of the system that you're building, but also the current status of each one of the classes. We've done it on 19 projects at last count. One of them was 1,856 classes, which is big - actually, probably bigger than it should be. It was a big commercial project, so it needed to be somewhat big." (Masterminds of Programming) This is the first time I've read or heard about using Styrofoam balls to model classes. Is that a commonly used technique? And how does that sort of modeling help us design the system better? If you have any photos to share showing how the classes are represented, that would be great!

    Read the article

  • So I got an econ degree... computing science or software systems (software engineering) degree?

    - by sofreakinghigh
    Okay, so here's the story. I want to work in developing software (not QA or writing tests), so although I am starting computing science this summer, I came across the Software Systems (i.e. software engineering) program, which is "applied" but sits under computing science. So what is the difference between the two disciplines? If I choose software engineering, will it require more in-depth expertise with calculus (I fail at it) and more coding time? I am looking for a way to write better and more efficient code. I want to go to school so I won't get lazy, and I want to pick a program that will directly aid me in writing and developing software. Graduating with an econ degree last year doesn't really help in landing jobs that require comp sci/software engineering degrees. I should have studied harder in economics (and maybe landed a job), but I was obsessed with learning how to program in various languages from day 1 at university; I just didn't think I was smart enough to pass comp sci courses (so I relied on books + IRC), and my parents said software jobs were being outsourced to India, so I thought this obsession was just a "phase" and I should keep it as a hobby. But yes, it's quite funny that I didn't pursue this field much earlier. As joelonsoftware.com says, an economics degree starts with a bang (microeconomics is really the only course you need): predicting stock prices (ridiculous!), realizing China's potential power to melt down the US economy and vice versa, interest rates being inversely related to bond premiums, which are inversely related to the stock market. It would be absolutely awesome if there were a program that combined finance + programming.

    Read the article

  • How might the security of a system be improved using database procedures?

    - by Centurion
    The use of Oracle PL/SQL procedures for controlling access to data is often emphasized in PL/SQL books and other sources as the more secure approach. I've seen several systems where all business logic related to the data is performed through packages, procedures, and functions, so the application code becomes quite "dumb" and is only responsible for the visualization part. I've even heard some devs call such approaches, and the architects driving them, "database nazis" :) because all the logic code resides in the database. I do know about the performance benefits of DB procedures, but now I'm interested in the claim of "better security" when using the thick-client model. I assume such a design is mostly used with Oracle (and maybe MS SQL Server) databases. I agree this approach improves security, but only if there are not many users and every system user has a database account, so we can control and monitor data access through standard database user security. However, how could this approach increase security for an average web system where thick clients are used - for example, one database user with DML grants on all tables, and other users handled via "users" and "user_rights" tables? We could use DB procedures, save usernames into a context, and use that for filtering, but the vulnerability remains at the root: if the main database account is compromised, nothing will help. Of course, in a real system we might at least use several main accounts (for example frontend_db_user, backend_db_user).
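
    The "save usernames into context and use that for filtering" pattern mentioned above looks roughly like this from the application side. A hedged sketch using the python-oracledb driver; the app_security.set_app_user and orders_api.get_orders procedures are hypothetical names, not from any real system:

      # Hypothetical pattern: one shared DB account, but every call first
      # records the application-level user in a session context; the stored
      # procedures filter rows against that context (e.g. via SYS_CONTEXT)
      # instead of trusting whatever SQL the client sends.
      import oracledb  # assumption: the python-oracledb driver is in use

      def fetch_orders(conn, app_user: str):
          cur = conn.cursor()
          # Hypothetical procedure wrapping DBMS_SESSION.SET_CONTEXT.
          cur.callproc("app_security.set_app_user", [app_user])
          # The procedure's query filters on the context, so the shared DB
          # account only ever sees rows this app user is entitled to.
          result = cur.var(oracledb.DB_TYPE_CURSOR)
          cur.callproc("orders_api.get_orders", [result])
          return result.getvalue().fetchall()

    As the poster notes, this filters honest traffic but does not survive compromise of the shared account itself while that account still holds DML grants on the underlying tables.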

    Read the article

  • BizTalk: History of one project architecture

    - by Leonid Ganeline
    "In the beginning God made heaven and earth. Then he started to integrate." At the very start was the requirement: integrate two working systems. Small digging up: It was one system. It was good but IT guys want to change it to the new one, much better, chipper, more flexible, and more progressive in technologies, more suitable for the future, for the faster world and hungry competitors. One thing. One small, little thing. We cannot turn off the old system (call it A, because it was the first), turn on the new one (call it B, because it is second but not the last one). The A has a hundreds users all across a country, they must study B. A still has a lot nice custom features, home-made features that cannot disappear. These features have to be moved to the B and it is a long process, months and months of redevelopment. So, the decision was simple. Let’s move not jump, let’s both systems working side-by-side several months. In this time we could teach the users and move all custom A’s special functionality to B. That automatically means both systems should work side-by-side all these months and use the same data. Data in A and B must be in sync. That’s how the integration projects get birth. Moreover, the specific of the user tasks requires the both systems must be in sync in real-time. Nightly synchronization is not working, absolutely.   First draft The first draft seems simple. Both systems keep data in SQL databases. When data changes, the Create, Update, Delete operations performed on the data, and the sync process could be started. The obvious decision is to use triggers on tables. When we are talking about data, we are talking about several entities. For example, Orders and Items [in Orders]. We decided to use the BizTalk Server to synchronize systems. Why it was chosen is another story. Second draft   Let’s take an example how it works in more details. 1.       User creates a new entity in the A system. This fires an insert trigger on the entity table. Trigger has to pass the message “Entity created”. This message includes all attributes of the new entity, but I focused on the Id of this entity in the A system. Notation for this message is id.A. System A sends id.A to the BizTalk Server. 2.       BizTalk transforms id.A to the format of the system B. This is easiest part and I will not focus on this kind of transformations in the following text. The message on the picture is still id.A but it is in slightly different format, that’s why it is changing in color. BizTalk sends id.A to the system B. 3.       The system B creates the entity on its side. But it uses different id-s for entities, these id-s are id.B. System B saves id.A+id.B. System B sends the message id.A+id.B back to the BizTalk. 4.       BizTalk sends the message id.A+id.B to the system A. 5.       System A saves id.A+id.B. Why both id-s should be saved on both systems? It was one of the next requirements. Users of both systems have to know the systems are in sync or not in sync. Users working with the entity on the system A can see the id.B and use it to switch to the system B and work there with the copy of the same entity. The decision was to store the pairs of entity id-s on both sides. If there is only one id, the entities are not in sync yet (for the Create operation). Third draft Next problem was the reliability of the synchronization. The synchronizing process can be interrupted on each step, when message goes through the wires. 
    The interruption can be a communication problem, a timeout, a temporary shutdown of one of the systems, or the second system failing to synchronize for some internal reason. Several potential problems prevented us from enclosing the whole synchronization process in a single transaction. The decision was to restart the whole sync process if it did not finish (in case of an error). For this purpose an additional service was created; let's call it the Resync service. We still keep the id pairs in both systems, but only for fast access, not for the synchronization process. For synchronization, the id-s are now kept in one main place, the Resync service database. The Resync service keeps records of the form:

    - Id.A
    - Id.B
    - Entity.Type
    - Operation (Create, Update, Delete)
    - IsSyncStarted (true/false)
    - IsSyncFinished (true/false)

    The example now looks like this:

    1. System A creates id.A. id.A is saved in A and sent to BizTalk.
    2. BizTalk sends id.A to the Resync service and to B. id.A is saved in the Resync store.
    3. System B creates id.B. id.A+id.B are saved in B and sent to BizTalk.
    4. BizTalk sends id.A+id.B to the Resync service and to A. id.A+id.B are saved in the Resync store.
    5. id.A+id.B are saved in A. The Resync service updates the IsSyncStarted and IsSyncFinished flags accordingly.

    The Resync service implements three main methods:

    - Save(id.A, Entity.Type, Operation)
    - Save(id.A, id.B, Entity.Type, Operation)
    - Resync()

    The two Save() methods store id-s in the service database; see steps 2 and 4 in the example above. What about Resync()? It is the method that finishes interrupted synchronization processes. While Save() is started by trigger events, Resync() works as an independent process: it periodically scans the Resync storage for "unfinished" records and restarts their synchronization. It retries each of them several times, then gives up.

    One more thing: both systems A and B must tolerate duplicates within one synchronization process. Say at step 3 system B was not able to send id.A+id.B back. The Resync service must restart the synchronization, which sends id.A to B a second time. In this case system B must simply send back the already-created id.A+id.B pair again, without errors. That is what "tolerate duplicates" means.

    Fourth draft. The next draft was created purely for aesthetics; as always happens, the aesthetics gave a significant performance gain to the whole system. First came the stupid question: why do we need this additional service with its special database? Can't we just get BizTalk to do what the Resync() does? So the Resync orchestration does the same thing as the Resync service. It is started by the id.A message and finished by the id.A+id.B message; the first works as a Start message, the second as a Finish message.

    The diagram of the whole process without errors is pretty straightforward. The Resync orchestration waits for the Finish message for a specific period of time, then resubmits the id.A message. It resubmits the id.A message a specific number of times, then gives up and gets suspended. It can be resumed, and then it starts the whole cycle again: waiting [, resubmitting [, getting suspended]], finishing.

    Tuning up. The Resync orchestration resubmits the id.A message with a special "Resubmitted" flag, and the subscription filter on the Resync orchestration includes the predicate (Resubmit_Flag != "Resubmitted").
    That means only the first Sync orchestration starts the Resync orchestration; the Sync orchestrations instantiated by resubmission can finish this Resync orchestration but cannot start another instance of it.

    Consider the diagram where system B was inaccessible for some period of time. The Resync orchestration resubmitted id.A two times; then system B responded with id.A+id.B, and this finished the Resync execution. What is interesting here is that several identical id.A messages were submitted but only one id.A+id.B message. Because of this, system B and the Resync must tolerate duplicate messages. We already stated this requirement for system B; now the same requirement applies to the Resync.

    Now let's assume system B was very slow with its first response and the Resync service had time to resubmit two id.A messages. System B responds not, as in the previous case, with one id.A+id.B message but with two. The first of them finishes the Resync execution for that id.A. What about the second id.A+id.B - where does it go? So we have to add one more internal requirement: the whole solution must tolerate many identical id.A+id.B messages. That is an easy task with BizTalk. I added a "SinkExtraMessages" subscriber (an orchestration with one Receive shape) that just consumes these messages and does nothing.

    Real design. The real architecture is much more complex and interesting. In reality each system can submit several id.A messages almost simultaneously and completely unordered. There is not only the "Create entity" operation but also Update and Delete, and these operations relate to each other: an Update after a Delete does not mean the same as an Update after a Create. In reality entities are related to each other - say, an Order and its Order Items - and a change to one can start a series of operations on the other. Moreover, the systems' internals are black boxes, and we cannot predict the exact content and order of such operation series. It is worth saying that I had to spend some time managing zombie-message problems. The zombies are still here, but they are not a problem now; that is another story.

    What is interesting in the final design? One orchestration works to help another be more reliable. Why is a two-orchestration design more reliable - isn't that strange? The Sync orchestration handles all the message exchange between the systems, which is where most of the errors can happen. The Resync orchestration sends and receives messages only within the BizTalk server. Is there another design? Sure: all the Resync functionality could be implemented inside the Sync orchestration. Hey guys, any other ideas?
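
    BizTalk specifics aside, the retry loop described above - start on id.A, finish on id.A+id.B, resubmit a bounded number of times, suspend after too many attempts, tolerate duplicate finishes - can be sketched in a few lines. A hedged Python illustration of the control flow only, not of the author's actual implementation:

      # Sketch of the Resync control flow: wait for the Finish (id.A+id.B)
      # message, resubmit the Start (id.A) a bounded number of times, then
      # give up ("get suspended"). Duplicate finishes are simply ignored.
      import queue

      def resync(id_a, send_id_a, finish_queue, timeout_s=30.0, max_resubmits=3):
          for attempt in range(max_resubmits + 1):
              if attempt > 0:
                  # Flagged so a resubmitted Start cannot spawn a second Resync,
                  # mirroring the (Resubmit_Flag != "Resubmitted") filter above.
                  send_id_a(id_a, resubmitted=True)
              try:
                  got_id_a, id_b = finish_queue.get(timeout=timeout_s)
              except queue.Empty:
                  continue  # no Finish yet: loop around and resubmit
              if got_id_a == id_a:
                  return id_b  # in sync; extra id.A+id.B duplicates are dropped
          raise RuntimeError(f"suspended: {id_a} unconfirmed after {max_resubmits} resubmits")

    The real orchestration also has to correlate many concurrent id.A conversations, which BizTalk does with correlation sets; the single queue above stands in for one correlated conversation.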

    Read the article

  • SSRS 2005 - Usability analysis - Is SSRS a good option for this scenario?

    - by Sach
    How practical is it to consider SSRS 2005 or SSRS 2008 as a reporting solution for reports that have to show millions of records (anywhere from 3 to 10 million)? Is there any threshold on report size in SSRS? How do I know whether, for a huge report, SSRS will consume all available memory and start paging operations to disk, or just fail with a memory error? Even if I keep increasing the memory, how can I be sure a given amount will be sufficient for such huge reports on the report server? All of these questions are haunting me because I have a dedicated report server with a decent hardware and OS configuration (8 processors, 8 GB RAM, 64-bit OS, and 64-bit SQL Server 2005). Still, my report with around 2 million records takes more than 6 minutes, and going from one page to the next takes 3 minutes! My data source is on a separate server, and when I execute just the stored procedure there, it returns the results in less than 2 minutes.

    Read the article

  • Know of any Java garbage collection log analysis tools?

    - by braveterry
    I'm looking for a tool or a script that will take the console log from my web app, parse out the garbage collection information and display it in a meaningful way. I'm starting up on a Sun Java 1.4.2 JVM with the following flags: -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails The log output looks like this: 54.736: [Full GC 54.737: [Tenured: 172798K->18092K(174784K), 2.3792658 secs] 257598K->18092K(259584K), [Perm : 20476K->20476K(20480K)], 2.4715398 secs] Making sense of a few hundred of these kinds of log entries would be much easier if I had a tool that would visually graph garbage collection trends.
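
    No specific tool is named in the question, but parsing these lines is a small scripting job; a minimal sketch that matches the Full GC format shown above (the gc.log file name is an assumption):

      # Parse "Full GC" lines like the sample above and print the tenured
      # occupancy before/after each collection, ready for plotting.
      import re

      full_gc = re.compile(
          r"^(?P<ts>[\d.]+): \[Full GC .*?"
          r"\[Tenured: (?P<before>\d+)K->(?P<after>\d+)K\((?P<cap>\d+)K\), (?P<secs>[\d.]+) secs\]"
      )

      with open("gc.log") as log:  # assumption: console log captured to gc.log
          for line in log:
              m = full_gc.search(line)
              if m:
                  print(f"{m.group('ts')}s: tenured {m.group('before')}K -> "
                        f"{m.group('after')}K of {m.group('cap')}K in {m.group('secs')}s")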

    Read the article

  • How to do a cost-benefit analysis for platform-level features?

    - by Callister Park
    I work on a development team that works closely with Product Managers. There is mutual agreement between the developers and the Product Managers that there should be a business case behind every feature the development team builds. My question is: what is an effective way to make a business case for platform-level features that have a higher up-front cost but provide long-term benefits? For example, the development team would like to implement a plug-in framework. There is a higher up-front cost to implement the framework, but delivering subsequent features as plug-ins will be cheaper in the long run. This is obvious to everyone, including the Product Managers. Is there a standard, simple way to express the cost-benefit trade-off? Is there a simple way to visualize it with a graph?
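
    One simple way to put numbers on it (not from the question; all figures below are made-up illustrations): compare the cumulative cost of shipping N features with and without the framework and find the break-even feature count.

      # Toy break-even model for a platform feature: the plug-in framework
      # costs extra up front but makes each subsequent feature cheaper.
      # All numbers are illustrative assumptions.
      framework_cost = 40      # person-days to build the plug-in framework
      feature_without = 15     # person-days per feature without the framework
      feature_with = 8         # person-days per feature as a plug-in

      def cumulative_cost(n_features, upfront, per_feature):
          return upfront + n_features * per_feature

      # First feature count at which the framework pays for itself.
      n = 1
      while cumulative_cost(n, framework_cost, feature_with) >= n * feature_without:
          n += 1
      print(f"Framework breaks even after {n} features")  # -> 6 with these numbers

    Plotting the two cumulative-cost lines against feature count gives exactly the graph the question asks about: the crossing point is the break-even.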

    Read the article
