Search Results

Search found 4050 results on 162 pages for 'requirements'.

Page 137 of 162

  • What is the most efficient way to handle points / small vectors in JavaScript?

    - by Chris
    Currently I'm creating a web-based (= JavaScript) application that uses a lot of "points" (= small, fixed-size vectors). There are basically two obvious ways of representing them: var pointA = [ xValue, yValue ]; and var pointB = { x: xValue, y: yValue }; So translating my point a bit would look like: var pointAtrans = [ pointA[0] + 3, pointA[1] + 4 ]; var pointBtrans = { x: pointB.x + 3, y: pointB.y + 4 }; Both are easy to handle from a programmer's point of view (the object variant is a bit more readable, especially as I'm mostly dealing with 2D data, seldom with 3D and hardly with 4D - but never more; it'll always fit into x, y, z and w). But my question now is: what is the most efficient way from the language perspective - theoretically and in real implementations? What are the memory requirements? What are the setup costs of an array vs. an object? ... My target browsers are Firefox and the WebKit-based ones (Chromium, Safari), but it wouldn't hurt to have a great (= fast) experience under IE and Opera as well.
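
    One way to settle this is a quick micro-benchmark in the target browsers themselves. The snippet below is only an illustrative sketch (the iteration count and timing approach are arbitrary and not from the original question); it compares repeatedly translating a point in array form versus object form:

    ```javascript
    // Illustrative micro-benchmark: array points vs. object points.
    // Absolute numbers vary a lot by engine; run it in each target browser.
    function benchArray(n) {
      var t0 = Date.now();
      var p = [0, 0];
      for (var i = 0; i < n; i++) {
        p = [p[0] + 3, p[1] + 4];        // translate, array form
      }
      return Date.now() - t0;
    }

    function benchObject(n) {
      var t0 = Date.now();
      var p = { x: 0, y: 0 };
      for (var i = 0; i < n; i++) {
        p = { x: p.x + 3, y: p.y + 4 };  // translate, object form
      }
      return Date.now() - t0;
    }

    var N = 1000000;
    console.log('array:  ' + benchArray(N) + ' ms');
    console.log('object: ' + benchObject(N) + ' ms');
    ```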

    Read the article

  • How to synchronize access to many objects

    - by vividos
    I have a thread pool with some threads (e.g. as many as the number of cores) that work on many objects, say thousands of objects. Normally I would give each object a mutex to protect access to its internals, lock it when I'm doing work, then release it. When two threads try to access the same object, one of the threads has to wait. Now I want to save some resources and be scalable, as there may be thousands of objects, and still only a handful of threads. I'm thinking about a class design where the thread has some sort of mutex or lock object, and assigns the lock to the object when the object should be accessed. This would save resources, as I only have as many lock objects as I have threads. Now comes the programming part, where I want to transfer this design into code, but don't quite know where to start. I'm programming in C++ and want to use Boost classes where possible, but self-written classes that handle these special requirements are OK. How would I implement this? My first idea was to have a boost::mutex object per thread, and each object has a boost::shared_ptr that initially is unset (or NULL). Now when I want to access the object, I lock it by creating a scoped_lock object and assigning it to the shared_ptr. When the shared_ptr is already set, I wait on the present lock. This idea sounds like a heap full of race conditions, so I sort of abandoned it. Is there another way to accomplish this design? A completely different way?
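
    One common middle ground between one mutex per object and one mutex per thread is lock striping: a fixed pool of mutexes, with each object mapped to a stripe by hashing its address, so memory scales with the number of stripes rather than the number of objects. The following is only a sketch of that idea (written against std::mutex; boost::mutex and boost::lock_guard would work the same way), not a drop-in answer from the post:

    ```cpp
    // Lock striping sketch: a fixed array of mutexes; each object is mapped to
    // one stripe by hashing its address. Two objects may share a stripe, which
    // trades a little extra contention for O(#stripes) memory.
    #include <cstddef>
    #include <functional>
    #include <mutex>
    #include <vector>

    class StripedLocks {
    public:
        explicit StripedLocks(std::size_t stripes = 64) : locks_(stripes) {}

        std::mutex& lock_for(const void* obj) {
            std::size_t h = std::hash<const void*>()(obj);
            return locks_[h % locks_.size()];
        }

    private:
        std::vector<std::mutex> locks_;
    };

    // Usage: guard any object without giving it its own mutex.
    //   StripedLocks stripes;
    //   {
    //       std::lock_guard<std::mutex> guard(stripes.lock_for(&someObject));
    //       someObject.doWork();
    //   }
    ```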

    Read the article

  • Can it be done?

    - by bzarah
    We are in the design phase of a project whose goal is re-platforming a classic ASP application to ASP.NET 4.0. The system needs to be entirely web-based. There are several new requirements for the new system that make this a challenging project: The system needs to be database-independent; it must, at version 1.0, support MS SQL Server, Oracle, MySQL, Postgres and DB2. The system must allow easy reporting from the database by third-party reporting packages. The system must allow an administrative end user to create their own tables in the database through the web-based interface. The system must allow an administrative end user to design/configure a (web-based) user interface where they can select tables and fields in the system (either our system's core tables or the custom tables they created themselves). The system must allow an administrative end user to create and maintain relationships between these custom-created tables, and also between these tables and our system's core tables. The system must allow an administrative end user to create business rules that enforce validation, show/hide UI elements, and block certain actions based on the identity of specific users, specific user groups or privileges. Essentially it's a system that has some core ticket-tracking functionality, but allows the end user to extend the interface, business rules and the database. Is this possible to build in a .NET, web-based environment? If so, what do you think the level of effort would be to get this done? We are currently a six-person shop, with 2.5 full-time developers.

    Read the article

  • Conceptual data modeling: Is RDF the right tool? Other solutions?

    - by paprika
    I'm planning a system that combines various data sources and lets users do simple queries on these. A part of the system needs to act as an abstraction layer that knows all connected data sources: the user shouldn't [need to] know about the underlying data "providers". A data provider could be anything: a relational DBMS, a bug tracking system, ..., a weather station. They are hooked up to the query system through a common API that defines how to "offer" data. The type of queries a certain data provider understands is given by its "offer" (e.g. I know these entities, I can give you aggregates of type X for relationship Y, ...). My concern right now is the unification of the data: the various data providers need to agree on a common vocabulary (e.g. the name of the entity "customer" could vary across different systems). Thus, defining a high level representation of the entities and their relationships is required. So far I have the following requirements: I need to be able to define objects and their properties/attributes. Further, arbitrary relations between these objects need to be represented: a verb that defines the nature of the relation (e.g. "knows"), the multiplicity (e.g. 1:n) and the direction/navigability of the relation. It occurs to me that RDF is a viable option, but is it "the right tool" for this job? What other solutions/frameworks do exist for semantic data modeling that have a machine readable representation and why are they better suited for this task? I'm grateful for every opinion and pointer to helpful resources.

    Read the article

  • PHP - Use isset inside function not working..?

    - by pnichols
    I have a PHP script that, when loaded, first checks whether it was loaded via a POST and, if not, whether $_GET['id'] is a number. Now I know I could do this like this: if(isset($_GET['id']) AND isNum($_GET['id'])) { ... } function isNum($data) { $data = sanitize($data); if ( ctype_digit($data) ) { return true; } else { return false; } } But I would like to do it this way: if(isNum($_GET['id'])) { ... } function isNum($data) { if ( isset($data) ) { $data = sanitize($data); if ( ctype_digit($data) ) { return true; } else { return false; } } else { return false; } } When I try it this way, if $_GET['id'] isn't set, I get a warning about an undefined index: id... It's like as soon as I put $_GET['id'] inside my function call, it sends a warning, even though my function checks whether that var is set or not. Is there another way to do what I want to do, or am I forced to always check isset first and then add my other requirements?
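
    The warning comes from the call site, not from the function: $_GET['id'] is evaluated before isNum() ever runs, so an isset() inside the function can't suppress the "undefined index" notice. One workaround (only a sketch - the function name is made up and the original sanitize() helper is left out) is to pass the key and do the lookup inside the function:

    ```php
    <?php
    // Sketch: look the parameter up inside the function so the undefined-index
    // notice never fires at the call site.
    function isNumParam($key)
    {
        if (!isset($_GET[$key])) {
            return false;
        }
        return ctype_digit((string) $_GET[$key]);
    }

    if (isNumParam('id')) {
        // safe to use (int) $_GET['id'] here
    }
    ```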

    Read the article

  • Programmatically Change Colors in C#

    - by Jason
    I am currently working on a project where, as different users add text to a document, I would like the color of the text to change. Originally, I was using C#'s predefined color values, putting the ones I wanted to use into an enum in my application and cycling through the colors as different users added annotations. This works fine, and I am okay with this solution. However, I also could have chosen to change the RGB values and derived colors that way. I'm curious about what type of algorithm would be good for changing those values to get different sets of colors. This is more just an exercise in something I had thought about. To clarify a little bit, I don't want to just increment one of the color values (R, G, or B) because that wouldn't give me enough variety in my colors. But I also don't think it would work to increment all three by equal amounts. I also have to watch out for repeating colors (up to a point). The requirements for my project anticipate, at most, 10 different reviewers.
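
    One algorithm that works well for a small number of visually distinct colors is to step the hue around the color wheel by the golden-ratio conjugate and convert from HSV to RGB, keeping saturation and value fixed. The sketch below is only an illustration (the class and method names are invented, and the saturation/value constants are arbitrary):

    ```csharp
    // Sketch: distinct per-reviewer colors via golden-ratio hue stepping.
    using System;
    using System.Drawing;

    static class ReviewerColors
    {
        const double GoldenRatioConjugate = 0.61803398875;

        public static Color ForIndex(int index)
        {
            double hue = (index * GoldenRatioConjugate) % 1.0;  // 0..1 around the wheel
            return FromHsv(hue * 360.0, 0.65, 0.95);
        }

        // Standard HSV -> RGB conversion (h in degrees, s and v in 0..1).
        static Color FromHsv(double h, double s, double v)
        {
            int i = (int)Math.Floor(h / 60.0) % 6;
            double f = h / 60.0 - Math.Floor(h / 60.0);
            double p = v * (1 - s), q = v * (1 - f * s), t = v * (1 - (1 - f) * s);
            double r, g, b;
            switch (i)
            {
                case 0: r = v; g = t; b = p; break;
                case 1: r = q; g = v; b = p; break;
                case 2: r = p; g = v; b = t; break;
                case 3: r = p; g = q; b = v; break;
                case 4: r = t; g = p; b = v; break;
                default: r = v; g = p; b = q; break;
            }
            return Color.FromArgb((int)(r * 255), (int)(g * 255), (int)(b * 255));
        }

        static void Main()
        {
            for (int reviewer = 0; reviewer < 10; reviewer++)
                Console.WriteLine(ForIndex(reviewer));
        }
    }
    ```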

    Read the article

  • grails services :: multiple projects

    - by naveen
    PROBLEM: I have multiple Grails projects (let's say appA, appB and appC) - services, to be precise - and I want to run them in a single grails-app, probably as a WAR deployment. How can I do this? REQUIREMENTS: I want this to be a single app since I am deploying it on the cloud and I don't have enough memory to hold all these service instances individually. The reason for having multiple Grails projects is scalability, so that if later on I want to run 10 instances of appA, 3 instances of appB and 1 instance of appC, I should be able to do that. EDIT: Can I use something like 0MQ? Will that be helpful in keeping the services separated from each other? How would I package my services? Reading the 0MQ docs, it seems it can work both in-process and with external processes. Will async Grails HTTP requests work with 0MQ in-process/external MQ calls? I haven't used 0MQ, but from the initial docs it seems to work; I need some input from people with experience of this scenario. Are there any other alternatives, or MQ alternatives?

    Read the article

  • Eclipse Java, writing a program [closed]

    - by ghassar
    I have an important exercise that I found on the internet. Please, I need help using Eclipse for Java. Thanks. I have to design, implement, test and document a Java program (a set of classes) for one of the following problem specifications: Problem 1 – Jubilee Estate Agency Property Management System. A local Estate Agent would like a prototype system to keep track of properties that are offered for sale. The Estate Agent sells domestic and commercial properties. You will need to define classes that represent the Estate Agency System. You should design your system and the classes that you will need before starting coding. Your system must have a graphical user interface and be designed and developed using the object-oriented principles of the MVC architectural design pattern, i.e. the user interface class must be separate from the other classes. The initial basic requirements for the system are as follows: • Include a list of domestic properties for sale that include details of: address, description, selling price, and number of rooms • Include a list of commercial properties for sale that include details of: address, description, selling price, and area in square metres • Enable the properties that are for sale to be viewed on the screen • Allow the customer to select one or more properties to be placed on a ‘viewing list’ so that the properties can be visited in person • Display on the screen the viewing list that shows the details of the properties chosen • Provide a basic search facility to find properties that are for sale in a particular price band and display the results • Enable a property to be marked as sold

    Read the article

  • Best Methods for Dynamically Creating New Objects

    - by frankV
    I'm looking for a method to dynamically create new class objects during runtime of a program. So far what I've read leads me to believe it's not easy and normally reserved for more advanced program requirements. What I've tried so far is this: // create a vector of type class vector<class_name> vect; // and use push_back (method 1) vect.push_back(*new Object); //or use for loop and [] operator (method 2) vect[i] = *new Object; neither of these throw errors from the compiler, but I'm using ifstream to read data from a file and dynamically create the objects... the file read is taking in some weird data and occasionally reading a memory address, and it's obvious to me it's due to my use/misuse of the code snippet above. The file read code is as follows: // in main ifstream fileIn fileIn.open( fileName.c_str() ); // passes to a separate function along w/ vector loadObjects (fileIn, vect); void loadObjects (ifstream& is, vector<class_name>& Object) { int data1, data2, data3; int count = 0; string line; if( is.good() ){ for (int i = 0; i < 4; i++) { is >> data1 >> data2 >> data3; if (data1 == 0) { vect.push_back(*new Object(data2, data3) ) } } } }
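
    For what it's worth, vect.push_back(*new Object) copies the heap object into the vector and then leaks the original, and reading with a fixed-count loop without checking each extraction can leave the variables holding stale or uninitialized values (which can look like memory addresses). Below is only a rough sketch of the usual value-based approach - the Record type, its fields and the file name are invented stand-ins for the question's class_name and data:

    ```cpp
    // Sketch: construct elements by value (no 'new'), and let the extraction
    // itself control the loop so bad input or EOF stops the read cleanly.
    #include <fstream>
    #include <vector>

    struct Record {                     // stand-in for the question's class_name
        int a, b;
        Record(int a_, int b_) : a(a_), b(b_) {}
    };

    void loadObjects(std::ifstream& in, std::vector<Record>& records)
    {
        int data1, data2, data3;
        while (in >> data1 >> data2 >> data3) {      // stops at EOF or bad data
            if (data1 == 0) {
                records.push_back(Record(data2, data3));   // copy, nothing leaked
            }
        }
    }

    int main()
    {
        std::ifstream in("objects.txt");             // hypothetical file name
        std::vector<Record> records;
        if (in.is_open()) {
            loadObjects(in, records);
        }
        return 0;
    }
    ```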

    Read the article

  • CSS challenge: Two background images, centered column with fixed width, min-height 100%

    - by laurent
    In a nutshell, I need a CSS solution for the following requirements: layout with one centered column with fixed width and a minimum height of 100%; two vertically repeated background images behind the centered column, one aligned to the left and one aligned to the right; cross-browser compatibility. A little more detail: today a new requirement for my current web site project came up - a background image with gradients on the left and right side. The challenge now is to specify two different background images while keeping the rest of the layout spec. Unfortunately the (simple) layout somehow doesn't go with the two backgrounds. My layout is basically one centered column with fixed width: #main_container { margin: 0 auto; min-height: 100%; width: 800px; } Furthermore it's necessary to stretch the column to a minimum height of 100%, since there are quite a few pages with only a little content. The following CSS styles take care of that: html { height: 100%; } body { margin: 0; height: 100%; padding: 0; } So far so good - until the two-background-image issue arrived. I tried the following solutions: two absolutely positioned divs behind the main container; one image defined on the body, one on the html element; one image defined on the body, the other one on a large div behind the main container. With every one of them, the dynamic height solution was ruined: either the main container didn't stretch to 100% when it was too small, or the background remained at 100% when the content was actually longer.
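
    For what it's worth, CSS3 multiple backgrounds make this particular combination fairly direct where the target browsers support them. This is only a sketch (the image file names are placeholders), not the solution from the original post:

    ```css
    /* Sketch: two vertically repeated gradients, one per side, declared as
       CSS3 multiple backgrounds on html; the 100% height chain stays as in
       the original layout. */
    html {
      height: 100%;
      background: url("left-gradient.png")  repeat-y left top,
                  url("right-gradient.png") repeat-y right top;
    }
    body {
      margin: 0;
      padding: 0;
      height: 100%;
    }
    #main_container {
      margin: 0 auto;
      min-height: 100%;
      width: 800px;
    }
    ```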

    Read the article

  • SQL Server Licensing in a VMware vSphere Cluster

    - by Helvick
    If I have SQL Server 2008 instances running in virtual machines on a VMware vSphere cluster with vMotion\DRS enabled, so that the VMs can (potentially) run on any one of the physical servers in the cluster, what precisely are the license requirements? For example, assume that I have 4 physical ESX hosts with dual physical CPUs and 3 separate single-vCPU virtual machines running SQL Server 2008 in that cluster. How many SQL Standard Processor licenses would I need? Is it 3 (one per VM) or 12 (one per VM on each physical host) or something else? How many SQL Enterprise Processor licenses would I need? Is it 3 (one per VM) or 8 (one for each physical CPU in the cluster) or, again, something else? The range in the list prices for these options goes from $17k to $200k, so getting it right is quite important. Bonus question: if I choose the Server+CAL licensing model, do I need to buy multiple server instance licenses for each of the ESX hosts (so 12 copies of the SQL Server Standard server license, so that there are enough licenses on each host to run all VMs), or, again, can I just license the VM, and what difference would using Enterprise per-server licensing make? Edited to add: having spent some time reading the SQL 2008 Licensing Guide (63 Pages! Includes Maps!*) I've come across this: • Under the Server/CAL model, you may run unlimited instances of SQL Server 2008 Enterprise within the server farm, and move those instances freely, as long as those instances are not running on more servers than the number of licenses assigned to the server farm. • Under the Per Processor model, you effectively count the greatest number of physical processors that may support running instances of SQL Server 2008 Enterprise at any one time across the server farm and assign that number of Processor licenses. And earlier: "...For SQL Server, these rule changes apply to SQL Server 2008 Enterprise only." By my reading this means that for my 3 VMs I only need 3 SQL 2008 Enterprise Processor licenses, or one copy of Server Enterprise + CALs for the cluster. By implication it means that I have to license all processors if I choose SQL 2008 Standard Processor licensing, or that I have to buy a copy of SQL Server 2008 Standard for each ESX host if I choose to use CALs. *There is a map to demonstrate that a Server Farm cannot extend across an area broader than 3 time zones unless it's in the European Free Trade Area; I wasn't expecting that when I started reading it.

    Read the article

  • Partitioning recommendations for a Proxmox VM Server (OpenVZ)

    - by luison
    We are new to virtualization and we are planning to turn our online server into a virtualized one, mainly for maintenance, backup and recovery improvements. Initially we would only have one real virtual system with load, plus 1-3 copies for testing and recovery, and maybe a small centralized syslog virtual machine. We would like, if possible, the host machine to include iptables plus rsync to back up to other machines, and some other global security systems. Due to this and the offerings of our hosting supplier, we are mainly considering Proxmox for its simplicity (we like the idea of its web admin panel), and also because, as I understand it, the container approach of OpenVZ systems may fit well resource-wise with our setup. The base system comes with Debian so we can personalise it to our requirements. A default Proxmox installation creates an LVM partition for the VMs. Our doubt is what the best partition structure for this would be, considering that: we would like to have a mirror of the root partition we could boot from if required (our provider supports booting the system from another partition via the control panel); we would ideally like to have a partition that could be shared among the VM systems (we still don't know if this is possible directly with OpenVZ containers, otherwise we are considering doing it by sharing it via NFS on the host machine); and we want to use the backup system available in the Proxmox host administrator to schedule VM backups and then rsync them to another machine. With this, based on a Linux RAID of approx. 750 GB, we are considering something like: ext3_1/ - (20Gb); ext3_2/bak_root - (20Gb) mostly unmounted, root partition sync; LVM_1 /var/lib/vz - (390Gb) partition for virtual images; LVM_2 /shared_data - (30Gb); LVM_3 /backups - (300Gb) where all backups would be allocated. Our initial tests with Proxmox seem to have issues with snapshot backups like this, perhaps caused by the fact that they cannot be done to another LVM partition (error: command 'lvcreate --size 1024M --snapshot --name vzsnap-ns204084.XXX.net-0 /dev/pve/LV' failed with exit code 5), in which case we might have to use a standard ext3 partition (but we are unsure if we can do this given the limit of 4 primary partitions). Does this make more or less sense? Would it be mad to, for example, write the VMs' /var/log to an NFS-mounted partition (on the host system)? Are there any other, easier ways to mount host system partitions (or folders) in the VMs?

    Read the article

  • Using WSUS Admin Console from outside domain

    - by Nick
    Environment: I have a workstation on our primary domain. We have a primary WSUS server that is the upstream server for 8 different testing domains. The primary WSUS server is not part of any domain. Routing is configured between my workstation and the primary WSUS server, and I can RDP to the primary WSUS server without any problem. The router is configured to forward any/any traffic between my workstation and the primary WSUS server. This WSUS server cannot be part of a domain due to external requirements (I can't change them) on the lab I work in. The version of WSUS is WSUS 3.0 SP2. What I want to do: I need to connect to the WSUS server with the WSUS Admin console from my local workstation. The end goal is to connect via PowerShell and manage it with that. I also need to take what I do here and port it to the 8 test domains so I can manage those WSUS servers. The routing is all in place so I can talk to the servers; it's just connecting to the WSUS console that is causing problems. The problem: I cannot get my workstation to connect to the WSUS console. I get one of the following errors depending on the setup. 1st error: "Cannot connect to 'WSUS'. You do not have the permissions required to access this WSUS server. To connect to the server you must be a member of the WSUS Administrators or WSUS Reporters security groups." I also get warning 7012 in the event log, which says the same thing. 2nd error: "Cannot connect to 'WSUS'. The server may be using another port or different Secure Sockets Layer setting." What I have tried: so far I have configured IIS for Anonymous Authentication on both the WSUS Administration and ApiRemoting30, using an account we'll call WSUS_User. With this in place, I get the 1st error. When I do this, though, the local WSUS console cannot be used either. Reverting back to only Windows Authentication allows the local console to work, but the remote console now gives the 2nd error. I have confirmed the port, and that there is no SSL in use (which is a policy that is pushed from above, which I cannot affect). I have placed WSUS_User in the groups mentioned above, but it still does not connect. I made sure WSUS_User has full access on C:\Program Files\Update Services and C:\Program Files\Update Services\WebServices. I am not very familiar with the workings of WSUS or IIS, and have gone as far as I can figure out on my own. Googling these errors always takes me to the same steps about Anonymous Authentication and configuring permissions on folders. Note: I have cross-posted this to StackOverflow as well.

    Read the article

  • All the Gear and No Idea: Suggestions for re-designing my home/office/entertainment network

    - by 5arx
    Help/advice/suggestions please: I have a load of kit that I love but which currently operates in a disconnected, sometimes counter-productive way. Because I never really had a master plan I just added these things one after another and connected them up in ad hoc ways. Since I bought my MacBook I've found I spend much less time on the MacPro that was until then my main machine. Perversely, as my job involves writing .NET software, I spend a lot of Mac time actually inside a Windows 7 VM. I stream media from the HP box to the PS3 and thus to the TV, but it's not without its limitations/annoyances. We listen to each other's iTunes libraries but the music files are all over the place and it would be good to know they were all safely in one location (and fully backed up). I need to come up with a strategy that will allow me to use all the kit for work, play (recording live music, making tunes, iMovie work), pushing/streaming media to the TV and sharing files with my other half (she uses a Windows laptop and her iPod touch). Ideally I'd like to be able to work on any of the machines and have a shared home drive that was visible to all machines, so all my current files were synced up wherever I was. It would be great if I could access everything securely and quickly over the web. I'd also like to be able to set up a background backup process. The kit list thus far: Apple MacPro 8GB/3x250GB RAID0 + 1TB; Apple MacBook Pro 13" 8GB/250GB - I spend a lot of my work time on a Windows 7 VM on this; a crappy Acer laptop (for the children's use - iPlayer, watching movies/TV files); HP ProLiant server 4GB/80GB+160GB+300GB; Sun Ultra 10 2 x 80GB (old, but in top-notch condition); PS3 160GB; iPod Classic; 2 x 8GB iPod Touch. Observations: part of the problem is our dual use of Windows and OS X - we can't go for a pure NT-style roaming profile. Because the server is also used for hosting test/beta applications and a SQL Server db, it can't be dedicated to file serving. The two Macs really could do with sharing a roaming profile or similar. I'd love to be able to do something useful with the Ultra 10; my other half has been trying to throw it away for over five years now and regularly asks what function it serves in my study :-( I've got no shortage of 500GB external USB hard drives. iMovie files are very large and ideally would be processed on a RAID system. Apple's Time Machine isn't so great. If anyone could suggest all or part of a setup that would fulfil some of my requirements I'd be very grateful. I am willing to consider purchasing one or two more bits of kit (an Apple TV and a Squeezebox have been mooted by friends) if they will help make efficiencies rather than add to the chaos and confusion. Thanks for looking.

    Read the article

  • What is a good layout for a somewhat advanced home network and storage solution?

    - by Shaun
    My home network/storage needs are changing and I am searching for some opinions and starting points on what a good network/storage layout would be that can serve my needs for a few years into the future. I think I have a decent starting point for equipment, but I am also willing to invest fairly heavily in a solution that can last me for a while. I am a bit of a tech nerd and I have a moderate tolerance for setup of the solution. I would prefer if maintenance of the system is somewhat low once it is setup, but I am willing to accept some tradeoffs. Existing equipment: Router - Netgear WNDR3700 (gigabit) Router - DLink Gamerlounge DGL-4300 (gigabit) Switch - 16 port Trendnet green switch (gigabit) Switch - 5 port Trendnet green (gigabit) Computer - i7-950 office computer (gigabit ethernet) Computer - Q6600 quad core media center, hooked up to TV, records shows (gigabit ethernet) Computer - Acer 1810T ultraportable laptop (gigabit and N ethernet) NAS - Intel SS4200-E (gigabit) External hard drive - 2TB WD Green drive (esata) All kinds of miscellaneous network connected TV, Bluray, Verizon network extender, HDhomerun TV tuners, etc. Requirements: -Robust backup solution for a growing collection of huge family picture files and personal files, around 1.5TB. (Including offsite backup) -Central location for all user's files, while also keeping them secure from each other. -Storage for terabytes of movie backups and recorded TV, and access to them from all computers (maybe around 4TB eventually) -Possibility to host files to friends and family easily Nice to have: -Backup of terabytes of movie backups Intriguing possibilities: -Capability to have users' Windows desktops and files look the same from all network computers I am not sure if the new Windows Home Server 2011 would fit into this well, if I need a domain server, how best to organize my backups, or how to most effectively use RAID. Currently I am simply backing up all computers to a RAID 1 on the NAS box, which I was thinking could prevent a situation where I reach for a backup and find that the disk is corrupt. One possibility that I am thinking about now is simply using my media center PC with a huge RAID of hard drives on which all files are stored. Pseudo-backup of all files would be present because of the RAID, but important files would also be backed up off site via carrying hard drives to work. But what if corruption seeps into the files and the corrupted data is then backed up? Does RAID protect against this? I really want to take next to zero risks with the irreplaceable files. I can handle some degree of risk with the movies and other files. I'm looking for critiques on this idea as well as other possibilities. To summarize, my goal is high functionality, media capable, and robust backup of irreplaceable files.

    Read the article

  • Setting up a fileserver, some questions?

    - by Tanax
    Recently I've become very interested in setting up a fileserver, mostly for home usage, but also because I live in 2 places and need to be able to access my files from both homes. I have already done some research into this but I am unclear about some things. My requirements are the following: it needs to work on both Mac and PC (I'm only using Windows on the PC at the moment, but it would be good if it supported more OSes to make it future-proof in case I need Linux or something else); I need to be able to set up a folder/drive/network space to act as a link to a certain folder on the fileserver; all files should only be stored on the fileserver, i.e. no "shared" folders like in Dropbox where files are stored on the client computer; I would prefer it if folders were password-protected or if I could somehow specify which users can access the fileserver's shares; and the fileserver's OS will most likely have to be Windows due to other factors outside of it being just a fileserver. I've already kinda figured out that I will need to set up a VPN so that I can access my fileserver from outside the local network; I'm probably going to use OpenVPN. Question 1: How would I go about setting up a VPN server so that I can connect to my local network at the fileserver's location? I know that since I'm on a dynamic IP I will have to get some sort of dynamic DNS service - I've already checked into this and I'm fairly sure I know how to fix that. I also know that I will have to forward the port OpenVPN uses in my router. Question 2: How would I actually share the folders on the fileserver so that I can access them on my other computers? I've researched Samba, but I'm uncertain whether it needs to run on a Linux OS. I know that the clients connecting to it can be Windows, for example, but can the Samba "server" be run on Windows? Also, it appears that Samba shares a folder, meaning it works like Dropbox - I don't want that. So how would I share a folder in that case to make it work like I want it to? Sorry for the incredibly long question; I tried to structure it the best I could for an easier read. Thanks in advance!

    Read the article

  • Exchange server not serving mobile devices - how to troubleshoot?

    - by chickeninabiscuit
    Our exchange server has suddenly stopped serving mobile devices. Attempts to connect result in our ActiveSync server returning HTTP 500. It is serving outlook clients fine. Our server is Windows 2003 SBS 6.5 SP2 There are no abnormal events in the system log. I ran the "Exchange ActiveSync with AutoDiscover" at https://www.testexchangeconnectivity.com/ I've notice an abnormality in the exchange properties, Log File Directory shows: Access denied. Facility: Win32 ID no: 80070005 Exchange System Manager As shown in the following image: I think it may be related to a recent issue we had here: http://serverfault.com/questions/40222/windows-server-2003-suddenly-unable-to-connect-to-anything We followed a procedure to reinstall TCP/IP: http://support.microsoft.com/kb/325356 I've run the "exchange activesync" connectivity test at testexchangeconnectivity.com: Attempting to Resolve the host name mail.immersive.com.au in DNS. Host successfully Resolved Additional Details IP(s) returned: 221.133.203.229 Testing TCP Port 443 on host mail.immersive.com.au to ensure it is listening/open. The port was opened successfully. Testing SSL Certificate for validity. The certificate passed all validation requirements. Test Steps Validating certificate name Successfully validated the certificate name Additional Details Found hostname mail.immersive.com.au in Certificate Subject Common name Validating certificate trust for Windows Mobile Devices Certificate is trusted and all certificates are present in chain Additional Details Certificate is trusted for Windows Mobile 5 and Later platforms. Root = [email protected], CN=Thawte Server CA, OU=Certification Services Division, O=Thawte Consulting cc, L=Cape Town, S=Western Cape, C=ZA Testing certificate date to ensure validity Date Validation passed. The certificate is not expired. Additional Details Certificate is valid: NotBefore = 1/5/2009 4:00:00 PM, NotAfter = 1/11/2010 3:59:59 PM Testing Http Authentication Methods for URL https://mail.immersive.com.au/Microsoft-Server-Activesync/ Http Authentication Methods are correct Additional Details Found all expected authentication methods and no disallowed methods. Methods Found: Basic Attempting an Activesync session with server Errors were encountered while testing the ActiveSync session Test Steps Attempting to send OPTIONS command to server OPTIONS response was successfully received and is valid Additional Details Headers received: MicrosoftOfficeWebServer: 5.0_Pub Pragma: no-cache Public: OPTIONS, POST Allow: OPTIONS, POST MS-Server-ActiveSync: 6.5.7638.1 MS-ASProtocolVersions: 1.0,2.0,2.1,2.5 MS-ASProtocolCommands: Sync,SendMail,SmartForward,SmartReply,GetAttachment,GetHierarchy,CreateCollection,DeleteCollection,MoveCollection,FolderSync,FolderCreate,FolderDelete,FolderUpdate,MoveItems,GetItemEstimate,MeetingResponse,ResolveRecipients,ValidateCert,Provision,Search,Notify,Ping Content-Length: 0 Date: Thu, 16 Jul 2009 01:07:27 GMT Server: Microsoft-IIS/6.0 X-Powered-By: ASP.NET Attempting FolderSync command on ActiveSync session FolderSync command test failed Tell me more about this issue and how to resolve it Additional Details Exchange

    Read the article

  • In search of a network file system with extended caching to speed up file access

    - by Brecht Machiels
    I'm running a small home server that stores my documents. The disks in this server are in a RAID 1 configuration (using Linux md) and it's also periodically backed up to an external hard drive to make sure I don't lose them. However, I'm always accessing the files from other computers on the home network using an SMB share, and this results in a considerable speed penalty (especially when connected over WLAN). This is quite annoying when editing large files, such as digital camera RAWs, for example. I've been looking for a solution to this problem. It would have to offer some kind of local caching to speed up the file access. The client would preferably not keep a copy of all the data on the server, as it consists of a very large collection of photographs, most of which I will not access frequently. Instead, it should only cache the accessed files and sync the changes back in the background. Ideally, it would also do some smart read-ahead (cache the files that are in the same directory as the currently opened file, for example), but I suppose that's asking a bit much. Synchronization should be automatic (on file change). Conflicting file changes (at the same time on different clients) are unlikely to happen in my use case, but I would prefer it if they were handled properly (notification to the user). So far I've come across the following options: something similar to Dropbox - iFolder seems to be the only thing that comes close, but its reputation (stability) and requirements put me off; a distributed file system such as OpenAFS - I'm not sure this will speed up file access, and it is probably overkill for what I need; maybe NFS or even Samba offer these possibilities. I read a bit about Windows' Offline Files, but its operation seems limited (at least on Windows XP). As this is just for personal use, I'm not willing to spend a lot of money - a free solution would be preferred. Also, the server needs to run on Linux, and I need a client for at least Windows.

    Read the article

  • Splitting an internet connection between multiple separate subnetworks

    - by pythonian4000
    Problem I have an internet connection that I want to split between four separate networks. My requirements are: I need to be able to monitor the amount of bandwidth and data being used by each network, and notify or control as necessary. The four networks should only be able to connect to the internet, not each other. My parents need to be able to operate it, so it needs a simple, preferably Windows-based GUI. Progress so far Server I have a mini-ITX server with six Gigabit ethernet ports - one for the ethernet internet connection, one for each of the four networks, and one for remote access to the server for administration. Bandwidth control I spent a long time researching solutions here. The majority of the control systems/software I found could control bandwidth usage via QOS, but could not monitor or control the amount of data being used. Eventually I found the SoftPerfect Bandwidth Manager, which has everything I need in terms of monitoring and control - per-interface quota management, usage statistics, a web interface for checking usage, and email notifications when quotas are exceeded. It is also Windows-based and has a simple GUI. Internet sharing This is where I am having issues. I am currently using Windows XP Pro SP2 for the server (yes, I know this is far from ideal, but it's the only spare Windows OS I currently have). I can't use the built-in Internet Connection Sharing for several reasons: The upstream internet router has an IP of 192.168.0.1 which ICS clashes with, and I cannot change the router settings. ICS can only share an internet connection with a single interface, but I have four. I have tried bridging the four network cards, but then the Bandwidth Manager cannot see the four individual interfaces - it only sees the bridge. I have tried setting up Dual DHCP DNS server (and am having issues getting DHCP offers to be received by clients), but that would still require gateway software of some sort, which I have been unable to find. My current attempt is to use OpenVPN, with a server for the internet NIC and a separate client for each of the four networks. My thought is that I could bridge the OpenVPN TAP devices to each NIC, meaning that the Bandwidth Manager would control traffic from the bridge instead of the interface. I have not made much progress here though - I've never used OpenVPN before. Questions Is there a Windows software package that does everything I need? (Unlikely, I know) Is there a Windows software package that will share internet between multiple NICs without bridging? Are either of my about attempts feasible? Would it help to have a newer/server version of Windows? Is there a non-Windows alternative that is easy to use?

    Read the article

  • SQL Server High Availability - Mirroring with MSCS?

    - by David
    I'm looking at options for high-availability for my SQL Server-powered application. The requirements are: HA protection from storage failure. Data accessibility when one of the DB servers is undergoing software updates (e.g. planned outage for Windows Update / SQL Server service-packs). Must not involve much in the way of hardware procurement. The application is an ASP.NET web application. The web application's users have their own database instances. I've seen two main options: SQL Server failover clustering, and SQL Server mirroring. I understand that SQL Server Failover Clustering requires the purchasing of a shared disk array and doesn't offer any protection if the shared storage goes down (so the documentation recommends to set up a Mirroring between two clusters). Database Mirroring seems the cheaper option (as it only requires two database servers and a simple witness box) - but I've heard it doesn't work well when you have a large number of databases. The application I'm developing involves giving each client their own database for their application - there could be hundreds of databases. Setting up the mirroring is no problem thanks to the automation systems we have in place. My final point concerns how failover works with respect to client connections - SQL Server Failover Clustering uses MSCS which means that the cluster is invisible to clients - a connection attempt might fail during the failover, but a simple reconnect will have it working again. However mirroring, as far as I know, requires that the client be aware of the mirrored partners: if the client cannot connect to the primary server then it tries the secondary server. I'm wondering how this work with respect to Connection Pooling in ASP.NET applications - does the client connection failovering mean that there's a potential 2-second (assuming 2000ms TCP timeout policy) pause when the connection pool tries the primary server on every connection attempt? I read somewhere that Mirroring can be used on top of MSCS which means that the client does not need to be aware of mirroring (so there wouldn't be any potential delays during connection, and also that no changes would need to be made to the client, not even the connection string) - however I'm finding it hard to get documentation or white papers on this approach. But if true, then it means the best method is then Mirroring (for HA) with MSCS (for client ignorance and connection performance). ...but how does this scale to a server instance that might contain hundreds of mirrored databases?
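
    On the last point about client awareness: with database mirroring, ADO.NET clients can be told about the mirror up front through the Failover Partner connection-string keyword, so the application needs no custom logic to locate the partner after a failover. A minimal sketch (the server and database names are placeholders, not from the post):

    ```csharp
    // Sketch: SqlClient tries the principal first and falls back to the
    // failover partner named in the connection string.
    using System;
    using System.Data.SqlClient;

    class MirrorConnectionDemo
    {
        static void Main()
        {
            string connectionString =
                "Data Source=SQLPRIMARY;Failover Partner=SQLMIRROR;" +
                "Initial Catalog=ClientDb42;Integrated Security=True;Connect Timeout=5";

            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();
                Console.WriteLine("Connected via: " + conn.DataSource);
            }
        }
    }
    ```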

    Read the article

  • Connecting a 2560x1440 display to a laptop?

    - by tjollans
    Having read Jeff Atwood's blog post on Korean 27" IPS LCDs, I've been wondering to what extent these are useful in a notebook + large display situation. I own a Lenovo Thinkpad Edge E320 with 2nd gen. integrated Intel graphics. According to the spec from Intel, this should support HDMI version 1.4, and, using DisplayPort, resolutions up to 2560x1600. HDMI version 1.4 supports resolutions up to 4096×2160, however, according to c't (German), the HDMI interface used with Intel chips only supports 1920x1200. The same goes for the DVI output - dual-link DVI-D, apparently, is not supported by Intel. It would appear that my laptop cannot digitally drive this kind of resolution. Now what about other laptops? According to the article in c't above, AMD's integrated graphics chips have the same limitation as Intel's. NVIDIA graphics cards, apparently, only offer resolutions up to 1900x1200 over HDMI out of the box, but it's possible, when using Linux at least, to trick the driver into enabling higher resolutions. Is this still true? What's the situation on Windows and OSX? I found no information on whether discrete AMD chips support ultra-high resolutions over HDMI. Owners of laptops with (Mini) DisplayPort / Thunderbolt won't have any issues with displays this large, but if you're planning to go for a display with dual-link DVI-D input only (like the Korean ones), you're going to need an adapter, which will set you back something like €70-€100 (since the protocols are incompatible). The big question mark in this equation is VGA: a lot of laptops have it, and I don't see any reason to think this resolution is not supported by the hardware (an oft-quoted figure appears to be 2048x1536@75Hz, so 2560x1440@60Hz should be possible, right?), but are the drivers likely to cause problems? Perhaps more critically, you'd need a VGA to dual-link DVI-D adapter that converts analog to digital signals. Do these exist? How good are they? How expensive are they? Is there a performance penalty involved? Please correct me if I'm wrong on any points. In summary, what are the requirements on a laptop to drive an external LCD at 2560x1440, in particular one that supports dual-link DVI-D only, and what tools and adapters can be used to lower the bar?

    Read the article

  • What "pieces" are needed in order to set up a cluster of physical servers?

    - by Chris Dutrow
    Background: Currently, we use Rackspace cloud servers. We have no intention to stop using them, but would like to look into setting up a cluster of physical servers (probably desktop computers in the $400 range with 8gb memory each) to offset some of our load and work as a secondary, more powerful, less reliable system. To put things in perspective, we can buy comparable desktop computers for the same price as we pay in one month to rent them on Rackspace Cloud. I understand that this is generally a dumb idea. However, in this particular instance, the server cluster is needed for its computation power. It is not mission-critical, it does not host a consumer-facing website, and if it goes down for a day or two, its not really a problem. Currently, we have access to business class verizon fios. If I understand correctly, we can get at least 25 dedicated IP addresses with this service, this should be enough. Requirements: Each server runs Linux Centos 6.3 Some of the servers run Python and execute processes from a task queue (Redis or RabbitMQ) Some of the servers are capable of serving static files and Python driven REST APIs Some of the servers host a Cassandra database cluster One or more of the servers are a Redis database servers One or more of the servers are PostgreSQL servers Questions: What kind of router or switch is needed? We would like the computers to be able to communicate effectively with each other via internal IP addresses. This is especially important for communicating with servers hosting Redis that need to be able to respond to requests very quickly. Are there special switches or routers that need to be used to connect the servers together? Are Desktop computers ok for this? We have found that we are mostly RAM-bottle necked, I understand that some servers have highly superior CPUs, but I'm not sure we need CPU power as much as we need RAM, which is cheap in Desktop computers. Will we have problems with the WIFI cards in the desktops or any other unexpected hardware limitation? What tools should be used to "image" the servers. For example, when we get an installation right for a Redis server or Cassandra node, are there tools that come with Linux Centos 6.3 to image the server to a USB drive or something like that? Or do we need to use some other software for this? What other things are we missing that we should be concerned about? Thanks so much!

    Read the article

  • I/O-intensive MySQL server on Amazon AWS

    - by rhossi
    We recently moved from a traditional data center to cloud computing on AWS. We are developing a product in partnership with another company, and we need to create a database server for the product we'll release. I have been using Amazon Web Services for the past 3 years, but this is the first time I've received a spec with this very specific hardware configuration. I know there are trade-offs and that real hardware will always be faster than virtual machines; knowing that fact beforehand, what would you recommend? 1) Amazon EC2? 2) Amazon RDS? 3) Something else? 4) Forget it baby, stick to the real hardware. Here are the hardware requirements. This server will be focused on I/O and MySQL for the statistics, and on memory size and disk space for the image hosting. Server 1: I/O - the main part of this server will be I/O processing; FusionIO cards have proven themselves extremely efficient, and this is currently the best you can have in this domain. o Fusion ioDrive2 MLC 365GB (http://www.fusionio.com/load/-media-/1m66wu/docsLibrary/FIO_ioDrive2_Datasheet.pdf) CPU - MySQL will use fewer CPU cores than Apache but it will use them very hard; the E7 family has 30 MB of L3 cache, which provides a performance boost: o 1x Intel E7-2870 will be OK. Storage - SAS will be good enough in terms of performance, especially considering the space required. o RAID 10 of 4 x SAS 10k or 15k for a total available space of 512 GB. Memory - o 64 GB minimum is required on this server considering the size of the statistics database. Warning: the statistics database will grow quickly; if possible, consider starting with 128 GB directly - it will help. Server 2: the specification is identical to Server 1 (the same I/O, CPU, storage and memory requirements apply). Thanks in advance. Best,

    Read the article
