Search Results

Search found 30117 results on 1205 pages for 'thread specific storage'.


  • Plug and Go NAS Storage

    - by graham.reeds
    My wife and I are separating. One of the things we need to extricate is the media we have accumulated over the years. So I am looking for a NAS solution that is a) relatively low-cost, b) reliable, and c) easy for a non-geek to use (I don't want to be tech support). All it needs to do is hold our iTunes library, photos, course work and maybe some movies and TV shows that I currently have. She will be connecting via her netbook. I have seen this thread, but the reviews on Amazon aren't particularly favourable. Due to the need for simplicity, WHS and FreeNAS are non-starters. I need redundancy: if a single-drive system were to die, she would lose her course work and photos. Is the ReadyNAS the only real solution out there?

    Read the article

  • Optimal storage of data structure for fast lookup and persistence

    - by Mikael Svenson
    Scenario
    I have the following methods:
    public void AddItemSecurity(int itemId, int[] userIds)
    public int[] GetValidItemIds(int userId)
    Initially I'm thinking of storage of the form:
    itemId -> userId, userId, userId
    userId -> itemId, itemId, itemId
    AddItemSecurity is based on how I get data from a third-party API; GetValidItemIds is how I want to use it at runtime. There are potentially 2,000 users and 10 million items. Item ids are of the form 2007123456, 2010001234 (10 digits, where the first four represent the year). AddItemSecurity does not have to perform super fast, but GetValidItemIds needs to be sub-second. Also, if there is an update on an existing itemId, I need to remove that itemId for users no longer in the list.
    I'm trying to think about how I should store this in an optimal fashion -- preferably on disk (with caching), but I want the code maintainable and clean. If the item ids had started at 0, I thought about creating a byte array of length MaxItemId / 8 for each user, and setting a true/false bit if the item was present or not. That would limit the array length to a little over 1 MB per user and give fast lookups as well as an easy way to update the list per user. By persisting this as Memory Mapped Files with the .NET 4 framework, I think I would get decent caching as well (if the machine has enough RAM) without implementing caching logic myself. Parsing the id, stripping out the year, and storing an array per year could be a solution. The ItemId -> UserId[] list can be serialized directly to disk and read/written with a normal FileStream in order to persist the list and diff it when there are changes. Each time a new user is added, all the lists have to be updated as well, but this can be done nightly.
    Question
    Should I continue to try out this approach, or are there other paths which should be explored as well? I'm thinking SQL Server will not perform fast enough, and it would add overhead (at least if it's hosted on a different server), but my assumptions might be wrong. Any thoughts or insights on the matter are appreciated. And I want to try to solve it without adding too much hardware :)
    [Update 2010-03-31]
    I have now tested with SQL Server 2008 under the following conditions:
    - Table with two columns (userid, itemid), both Int
    - Clustered index on the two columns
    - Added ~800,000 items for 180 users - a total of 144 million rows
    - Allocated 4 GB RAM for SQL Server
    - Dual-core 2.66 GHz laptop
    - SSD disk
    - Use a SqlDataReader to read all itemids into a List
    - Loop over all users
    If I run one thread, it averages 0.2 seconds. When I add a second thread it goes up to 0.4 seconds, which is still OK. From there on the results degrade: adding a third thread brings a lot of the queries up to 2 seconds; a fourth thread, up to 4 seconds; a fifth spikes some of the queries up to 50 seconds. The CPU is maxed out while this is going on, even on one thread. My test app takes some of the time due to the tight loop, and SQL the rest. This leads me to the conclusion that it won't scale very well -- at least not on my tested hardware. Are there ways to optimize the database, say storing an array of ints per user instead of one record per item? But this makes it harder to remove items.
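
    A minimal sketch of the bit-per-item idea described above, shown here in C++ rather than .NET; the year-stripping index scheme and the plain in-memory buffer are assumptions for the example, and the memory-mapped persistence is left out:

        #include <cstdint>
        #include <vector>

        // One bit per item for a single user; the index is the itemId with the year stripped off.
        class UserItemBitmap {
        public:
            explicit UserItemBitmap(std::size_t maxItemsPerYear)
                : bits_((maxItemsPerYear + 7) / 8, 0) {}

            void set(std::uint32_t itemIndex, bool allowed) {
                if (allowed) bits_[itemIndex / 8] |= static_cast<std::uint8_t>(1u << (itemIndex % 8));
                else         bits_[itemIndex / 8] &= static_cast<std::uint8_t>(~(1u << (itemIndex % 8)));
            }

            bool test(std::uint32_t itemIndex) const {
                return (bits_[itemIndex / 8] >> (itemIndex % 8)) & 1u;
            }

        private:
            std::vector<std::uint8_t> bits_;  // roughly 1.25 MB for 10 million items
        };

        // Usage: itemId 2010001234 -> year 2010, index 1234
        //   UserItemBitmap user42(10000000);
        //   user42.set(1234, true);        // AddItemSecurity marks the item as readable
        //   bool ok = user42.test(1234);   // GetValidItemIds-style membership check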

    Read the article

  • Trouble in ActiveX multi-thread invoke javascript callback routine

    - by code0tt
    Hi everyone. I'm having some trouble with ActiveX programming using ATL. I'm trying to make an ActiveX control that can asynchronously download files from an HTTP server to a local folder, and after the download it should invoke a JavaScript callback function. My solution: run a thread M to monitor download thread D; when D finishes the job, M terminates itself and invokes the IDispatch interface to call the JavaScript function.
    **************** HERE IS MY CODE ****************
    /* javascript code */
    function download() {
        var xfm = new ActiveXObject("XFileMngr.FileManager.1");
        xfm.download('http://somedomain/somefile', 'localdev:\\folder\localfile', function(msg) { alert(msg); });
    }
    /* C++ code */
    // main routine
    STDMETHODIMP CFileManager::download(BSTR url, BSTR local, VARIANT scriptCallback)
    {
        CString csURL(url);
        CString csLocal(local);
        CAsyncDownload download;
        download.Download(this, csURL, csLocal, scriptCallback);
        return S_OK;
    }
    // parts of CAsyncDownload.h
    typedef struct tagThreadData
    {
        CAsyncDownload* pThis;
    } THREAD_DATA, *LPTHREAD_DATA;

    class CAsyncDownload : public IBindStatusCallback
    {
    private:
        LPUNKNOWN pcaller;
        CString csRemoteFile;
        CString csLocalFile;
        CComPtr<IDispatch> spCallback;
    public:
        void onDone(HRESULT hr);
        HRESULT Download(LPUNKNOWN caller, CString& csRemote, CString& csLocal, VARIANT callback);
        static DWORD __stdcall ThreadProc(void* param);
    };
    // parts of CAsyncDownload.cpp
    void CAsyncDownload::onDone(HRESULT hr)
    {
        if (spCallback)
        {
            TRACE(TEXT("invoke callback function\n"));
            CComVariant vParams[1];
            vParams[0] = "callback is working!";
            DISPPARAMS params = { vParams, NULL, 1, 0 };
            HRESULT hr = spCallback->Invoke(0, IID_NULL, LOCALE_USER_DEFAULT, DISPATCH_METHOD,
                                            &params, NULL, NULL, NULL);
            if (FAILED(hr))
            {
                CString csBuffer;
                csBuffer.Format(TEXT("invoke failed, result value: %d \n"), hr);
                TRACE(csBuffer);
            }
            else
            {
                TRACE(TEXT("invoke was successful\n"));
            }
        }
    }

    HRESULT CAsyncDownload::Download(LPUNKNOWN caller, CString& csRemote, CString& csLocal, VARIANT callback)
    {
        CoInitializeEx(NULL, COINIT_MULTITHREADED);
        csRemoteFile = csRemote;
        csLocalFile = csLocal;
        pcaller = caller;
        switch (callback.vt)
        {
        case VT_DISPATCH:
        case VT_VARIANT:
            spCallback = callback.pdispVal;
            break;
        default:
            spCallback = NULL;
        }
        LPTHREAD_DATA pData = new THREAD_DATA;
        pData->pThis = this;
        // create monitor thread M
        HANDLE hThread = CreateThread(NULL, 0, ThreadProc, (void*)(pData), 0, NULL);
        if (!hThread)
        {
            delete pData;
            return HRESULT_FROM_WIN32(GetLastError());
        }
        WaitForSingleObject(hThread, INFINITE);
        CloseHandle(hThread);
        CoUninitialize();
        return S_OK;
    }

    DWORD __stdcall CAsyncDownload::ThreadProc(void* param)
    {
        LPTHREAD_DATA pData = (LPTHREAD_DATA)param;
        // here, we will create http download thread D
        // when the download job is finished, call the onDone method
        pData->pThis->onDone(S_OK);
        delete pData;
        return 0;
    }
    **************** END OF CODE ****************
    OK, the above is part of my source code. If I call the onDone method in the sub-thread, I get an OLE error (-2147418113 (8000FFFF), Catastrophic failure). Did I miss something? Please help me figure it out.
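
    One possible direction, not a confirmed diagnosis: the callback IDispatch is obtained on the browser's thread and then invoked from the worker thread without being marshalled, and crossing a COM apartment boundary that way is a classic source of 8000FFFF. Below is a minimal sketch, assuming the ATL code above, of marshalling the callback interface to the worker thread; passing the IStream* through THREAD_DATA alongside the pThis pointer is an assumption added purely for illustration.

        // On the thread that received the VARIANT callback (e.g., inside Download()):
        IStream* pCallbackStream = NULL;
        HRESULT hrMarshal = CoMarshalInterThreadInterfaceInStream(IID_IDispatch,
                                                                  callback.pdispVal,
                                                                  &pCallbackStream);
        // ...hand pCallbackStream to the worker thread through THREAD_DATA...

        // On the worker thread (e.g., inside ThreadProc, after CoInitializeEx):
        IDispatch* pCallback = NULL;
        HRESULT hrUnmarshal = CoGetInterfaceAndReleaseStream(pCallbackStream,
                                                             IID_IDispatch,
                                                             (void**)&pCallback);
        if (SUCCEEDED(hrUnmarshal) && pCallback)
        {
            CComVariant vParams[1];
            vParams[0] = "callback is working!";
            DISPPARAMS params = { vParams, NULL, 1, 0 };
            pCallback->Invoke(0, IID_NULL, LOCALE_USER_DEFAULT, DISPATCH_METHOD,
                              &params, NULL, NULL, NULL);
            pCallback->Release();
        }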

    Read the article

  • How to reclaim storage for deleted LOBs

    - by Jim Hudson
    I have a LOB tablespace, currently holding 9 GB out of 12 GB available. And, as far as I can tell, deleting records doesn't reclaim any storage in the tablespace. I'm getting worried about handling further processing. This is Oracle 11.1 and the data are in a CLOB and a BLOB column in the same table. The LOB index segments (SYS_IL...) are small; all the storage is in the data segments (SYS_LOB...). We've tried purge and coalesce and didn't get anywhere -- same number of bytes in user_extents. "Alter table xxx move" will work, but we'd need to have someplace to move it to that has enough space for the revised data. We'd also need to do that off-hours and rebuild the indexes, of course, but that's easy enough. Copying out the good data and doing a truncate, then copying it back, will also work. But that's pretty much just what the "alter table" command does. Am I missing some easy ways to shrink things down and get the storage back? Or is "alter table xxx move" the best approach? Or is this a non-issue and Oracle will grab back the space from the deleted LOB rows when it needs it?

    Read the article

  • Windows Azure - access webrole local storage from separate workerrole

    - by Brett Smith
    I'm running an application on Windows Azure. The MVC views need to be dynamic; I started by storing them as records in the database, but am quite keen to move them to a physical location. My concept was to create the physical file via code, which worked great and sped up the page load dramatically. This was of course before I realised that the files were only available for the duration of the role. Next I looked at a start-up task to create the files when the role was started -- however, I then realised that any separate instances weren't going to sync up unless I monitored the database for changes. So I moved from a start-up task to a function in the Run method of the role that checks the database every 10 minutes to see if changes have occurred. The problem is that this seems to choke up the application (at least in the warm-up stage). Ideally I would like to move the Run function to its own worker role that can sit there and push files out to web role instances, but I'm unsure how I would go about accessing the web role's local storage from the worker role. Can anybody tell me whether this is actually possible, and hopefully point me in the right direction to achieve it? Just to clarify what I'm trying to achieve:
    - A View is created in the user interface running on a web role and stored in the database
    - A separate web role (front end) has the client-side application, with a virtual path provider pointing View requests to local storage (LocalResource)
    - A separate worker role creates the View structure and loads it into the client-side web role's local storage

    Read the article

  • Do all the HTML5 storage systems work together?

    - by azera
    While there is a lot of good stuff about HTML5, one thing I don't get is the redundant storage mechanisms. First there are localStorage and sessionStorage, which are key-value stores; one is for one instance of the app ("one tab"), and the other works for all the instances of that application so they can share data. Both are saved when you close your browser and have a limited size (usually 5 MB). That's great, and everything would be nice if we stopped there. But then there is the "Web SQL Database", which has the same security system as localStorage, the same size limit, the same everything, except it works like/is SQLite, with tables and SQL syntax and all of that. And the bummer is, they don't work on the same data at all! This is not two ways to access your data; these are really two separate stores for every HTML5 app out there (not created by default, yes, but still, you see my point). What I would like to know is: is there a reason for both of these mechanisms to exist at the same time? Or did they just look at the SQL and NoSQL movements to pick the best of each and go "screw it, let's add both!"? Why not implement local/session storage as a table inside the Web SQL DB?

    Read the article

  • MySQL " identify storage engine statement"

    - by sammysmall
    This IS NOT a homework question! While building my current student database project I realized that I may want to identify comprehensive information about a database design in the future. More so, if I am fortunate enough to get a job in this field and were handed a database project, how could I break down certain elements for identification? In all of my previous designs I have been using MySQL Community Server (GPL) 5.1.42. I thought (duh) that I was using MyISAM, based on most of my textbook instruction and MySQL 5.0 Reference Manual :: 13 Storage Engines :: 13.1 The MyISAM Storage Engine. Using "SHOW ENGINES" at the console, I determined that this was in fact incorrect for this version... No problem; I figured out why they have "versions", the need to pay attention to what version is being used, and the need for a means to determine what I am about to mess up "if" I do not pay attention to detail...
    Q1. Specifically, what statement will identify the version used by someone else's initial database creation? (Since I created my own databases, I know what version I used.)
    Q2. Specifically, what statement will identify the storage engine that the developer used when creating the database? (I specified a particular database in my collection, then tried SHOW ENGINE, which did not work, then tried to just get the metadata from one table in that database:
    mysql> SELECT duck_cust, table_type, engine
        -> FROM INFORMATION_SCHEMA.tables
        -> WHERE table_schema = 'tp'
        -> ORDER BY table_type ASC, table_name DESC;
    As this was not really what I wanted (and did not work), I am looking for some direction from the pros...)
    Q3. (If you really have the inclination to continue helping:) If I were to access a database from an earlier/later "version", are there backward/forward compatibility issues for maintaining/updating data between versions?
    Please and thank you in advance for your time and efforts! sammysmall

    Read the article

  • Convert a binary tree to linked list, breadth first, constant storage/destructive

    - by Merlyn Morgan-Graham
    This is not homework, and I don't need to answer it, but now I have become obsessed :) The problem is: design an algorithm to destructively flatten a binary tree to a linked list, breadth-first. Okay, easy enough. Just build a queue, and do what you have to. That was the warm-up. Now, implement it with constant storage (recursion, if you can figure out an answer using it, is logarithmic storage, not constant). I found a solution to this problem on the Internet about a year back, but now I've forgotten it, and I want to know :) The trick, as far as I remember, involved using the tree to implement the queue, taking advantage of the destructive nature of the algorithm. When you are linking the list, you are also pushing an item into the queue. Each time I try to solve this, I lose nodes (such as each time I link the next node/add to the queue), I require extra storage, or I can't figure out the convoluted method I need to get back to a node that has the pointer I need. Even the link to that original article/post would be useful to me :) Google is giving me no joy. Edit: Jérémie pointed out that there is a fairly simple (and well-known) answer if you have a parent pointer. While I now think he is correct about the original solution containing a parent pointer, I really wanted to solve the problem without it :) The refined requirements use this definition for the node:
    struct tree_node {
        int value;
        tree_node* left;
        tree_node* right;
    };
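
    For reference, a short sketch of the warm-up version mentioned above: an O(n)-auxiliary-storage flatten that uses an explicit queue and relinks nodes through their right pointers. The function name and the choice of right as the "next" pointer are assumptions for the example; this is not the constant-storage solution being asked about.

        #include <queue>

        struct tree_node {
            int value;
            tree_node* left;
            tree_node* right;
        };

        // Breadth-first, destructive flatten using an explicit queue (the warm-up, not the
        // constant-storage version). Returns the head of a list linked through ->right.
        tree_node* flatten_bfs(tree_node* root) {
            if (!root) return nullptr;
            std::queue<tree_node*> q;
            q.push(root);
            tree_node* tail = nullptr;
            while (!q.empty()) {
                tree_node* node = q.front();
                q.pop();
                if (node->left)  q.push(node->left);    // children are enqueued before the
                if (node->right) q.push(node->right);   // node's pointers get overwritten
                if (tail) tail->right = node;           // append node to the growing list
                node->left = nullptr;
                tail = node;
            }
            tail->right = nullptr;
            return root;
        }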

    Read the article

  • Ubuntu 10.04 recognizing USB 2.0 external HD as USB 1.1

    - by btucker
    When I connect the USB 2.0 drive I see this: usb 1-4.3: new full speed USB device using ohci_hcd and address 5 so I know it's getting seen as USB 1.1. usb-devices shows that it really is USB 2.0 and connected to a USB 2.0 hub: T: Bus=01 Lev=01 Prnt=01 Port=03 Cnt=01 Dev#= 2 Spd=12 MxCh= 4 D: Ver= 2.00 Cls=09(hub ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1 P: Vendor=05e3 ProdID=0608 Rev=77.61 S: Product=USB2.0 Hub C: #Ifs= 1 Cfg#= 1 Atr=e0 MxPwr=100mA I: If#= 0 Alt= 0 #EPs= 1 Cls=09(hub ) Sub=00 Prot=00 Driver=hub T: Bus=01 Lev=02 Prnt=02 Port=01 Cnt=01 Dev#= 4 Spd=12 MxCh= 0 D: Ver= 2.00 Cls=00(>ifc ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1 P: Vendor=13fd ProdID=1340 Rev=02.10 S: Manufacturer=Generic S: Product=External C: #Ifs= 1 Cfg#= 1 Atr=c0 MxPwr=2mA I: If#= 0 Alt= 0 #EPs= 2 Cls=08(stor.) Sub=06 Prot=50 Driver=usb-storage It seems the problem is that root hub is: T: Bus=01 Lev=00 Prnt=00 Port=00 Cnt=00 Dev#= 1 Spd=12 MxCh=10 D: Ver= 1.10 Cls=09(hub ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1 P: Vendor=1d6b ProdID=0001 Rev=02.06 S: Manufacturer=Linux 2.6.32-25-server ohci_hcd S: Product=OHCI Host Controller S: SerialNumber=0000:00:02.0 C: #Ifs= 1 Cfg#= 1 Atr=e0 MxPwr=0mA I: If#= 0 Alt= 0 #EPs= 1 Cls=09(hub ) Sub=00 Prot=00 Driver=hub And there's no mention of ehci_hcd. lsusb -t gives me: /: Bus 01.Port 1: Dev 1, Class=root_hub, Driver=ohci_hcd/10p, 12M |__ Port 4: Dev 2, If 0, Class=hub, Driver=hub/4p, 12M |__ Port 2: Dev 4, If 0, Class=stor., Driver=usb-storage, 12M |__ Port 3: Dev 5, If 0, Class=stor., Driver=usb-storage, 12M |__ Port 6: Dev 3, If 0, Class=stor., Driver=usb-storage, 12M It seems like I'm missing something which would allow the OS to see USB 2.0 devices. Can anyone point me in the right direction? EDIT Full lsusb -v output: Bus 001 Device 005: ID 13fd:1340 Initio Corporation Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 idVendor 0x13fd Initio Corporation idProduct 0x1340 bcdDevice 2.10 iManufacturer 1 Generic iProduct 2 External iSerial 3 57442D574341595930323337 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 32 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xc0 Self Powered MaxPower 2mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 2 bInterfaceClass 8 Mass Storage bInterfaceSubClass 6 SCSI bInterfaceProtocol 80 Bulk (Zip) iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0040 1x 64 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x02 EP 2 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0040 1x 64 bytes bInterval 0 Device Qualifier (for other device speed): bLength 10 bDescriptorType 6 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 bNumConfigurations 1 Device Status: 0x0001 Self Powered Bus 001 Device 002: ID 05e3:0608 Genesys Logic, Inc. USB-2.0 4-Port HUB Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 0 Full speed (or root) hub bMaxPacketSize0 64 idVendor 0x05e3 Genesys Logic, Inc. 
idProduct 0x0608 USB-2.0 4-Port HUB bcdDevice 77.61 iManufacturer 0 iProduct 1 USB2.0 Hub iSerial 0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 100mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0001 1x 1 bytes bInterval 255 Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 4 wHubCharacteristic 0x00e0 Ganged power switching Ganged overcurrent protection Port indicators bPwrOn2PwrGood 50 * 2 milli seconds bHubContrCurrent 100 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff Hub Port Status: Port 1: 0000.0100 power Port 2: 0000.0103 power enable connect Port 3: 0000.0103 power enable connect Port 4: 0000.0100 power Device Qualifier (for other device speed): bLength 10 bDescriptorType 6 bcdUSB 2.00 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 1 Single TT bMaxPacketSize0 64 bNumConfigurations 1 Device Status: 0x0001 Self Powered Bus 001 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 1.10 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 0 Full speed (or root) hub bMaxPacketSize0 64 idVendor 0x1d6b Linux Foundation idProduct 0x0001 1.1 root hub bcdDevice 2.06 iManufacturer 3 Linux 2.6.32-25-server ohci_hcd iProduct 2 OHCI Host Controller iSerial 1 0000:00:02.0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0002 1x 2 bytes bInterval 255 Hub Descriptor: bLength 11 bDescriptorType 41 nNbrPorts 10 wHubCharacteristic 0x0002 No power switching (usb 1.0) Ganged overcurrent protection bPwrOn2PwrGood 1 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 0x00 PortPwrCtrlMask 0xff 0xff Hub Port Status: Port 1: 0000.0100 power Port 2: 0000.0100 power Port 3: 0000.0100 power Port 4: 0000.0103 power enable connect Port 5: 0000.0100 power Port 6: 0000.0103 power enable connect Port 7: 0000.0100 power Port 8: 0000.0100 power Port 9: 0000.0100 power Port 10: 0000.0100 power Device Status: 0x0003 Self Powered Remote Wakeup Enabled

    Read the article

  • Varnish returning 503, FetchError (could not get storage)

    - by Archan
    On the current setup we're running into a problem with Varnish. We're running CentOS 5.7 x86_64 xenpv, with cPanel WHM, hosted at VPS.net. Sometimes we will receive a Guru Meditation from Varnish, and when we look in the varnishlog with the following command:
    varnishlog -d -c -m TxStatus:503
    it returns output similar to the following:
    15 VCL_call c recv
    15 VCL_acl c NO_MATCH devs
    15 VCL_return c pass
    15 VCL_call c hash
    15 Hash c ****
    15 Hash c *************
    15 VCL_return c hash
    15 VCL_call c pass pass
    15 Backend c 12 default default
    15 TTL c 1835862523 RFC 0 -1 -1 1332454056 0 1332454055 375007920 0
    15 VCL_call c fetch hit_for_pass
    15 ObjProtocol c HTTP/1.1
    15 ObjResponse c OK
    15 ObjHeader c Date: Thu, 22 Mar 2012 22:07:35 GMT
    15 ObjHeader c Server: Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/0.9.8e-fips-rhel5 mod_bwlimited/1.4 mod_fcgid/2.3.6
    15 ObjHeader c X-Powered-By: PHP/5.3.9
    15 ObjHeader c Expires: Thu, 19 Nov 1981 08:52:00 GMT
    15 ObjHeader c Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
    15 ObjHeader c Pragma: no-cache
    15 ObjHeader c Content-Type: text/html; charset=utf-8
    15 ObjHeader c X-Cacheable: NO:Cache-Control=private
    15 FetchError c chunked read_error: 12 (Could not get storage)
    15 VCL_call c error deliver
    15 VCL_call c deliver deliver
    As far as I could gather, we could try increasing the nuke_limit, but currently we have a nuke_limit of 500, and when running varnishstat -1 -f n_lru_nuked we "only" get a total of 1031, even though we have seen the error happen on several pages. When we then run top to see how much memory Varnish is using, it only shows that it is using 763m, although we've set it to be allowed to use 1200m. Any ideas what the problem could be?

    Read the article

  • Need a recommendation for shared storage on auto-scaling ec2 w/ scalr

    - by john h.
    I have come across so many answers to this question that I am completely lost! I am moving our two sites to a load-balanced EC2 system with Scalr as our cloud manager. Now the question is coming up about persistent storage for users' uploaded content and other files. Could someone please give me a suggestion, and possibly a link to a tutorial, for the following setup and goals:
    - 2 websites (1 forum, 1 ecommerce)
    - 1 LB
    - 1 app server (to scale out to as many as needed)
    - 1 DB server (to scale out to as many as needed)
    Our sites will need to autoscale, and according to what I am learning about Scalr, that means as new instances load up, I need to run a script to set the basics up on that server (git, PHP mods, pull site from git, move keys, etc.). What I don't understand is how I should handle user-uploaded content like profile pictures, avatars, product images, themes, etc. Do I mount an EBS or s3fs folder to hold the websites (maybe /var/www/websitefolder), or do I do something like mount the avatar folders (/var/www/websitefolder/images/avatars)? I am not sure where to go with this. Could someone give me some detailed help? -John

    Read the article

  • GlusterFS as elastic file storage?

    - by Christopher Vanderlinden
    Is there any way to run GlusterFS in a replicated mode, but with the ability to dynamically scale the volume up and down? Say you have 3 servers, all running glusterd. Your Gluster volume would have to be set up with replica 3:
    gluster volume create test-volume replica 3 192.168.0.150:/test-volume 192.168.0.151:/test-volume 192.168.0.152:/test-volume
    You would then mount it as, say, /mnt/gfs_test. What happens when I want to add 2 more servers to the storage pool and then also use them in this volume? Is there any easy way to expand the volume AND increase that replica count to 5? My end goal is to run this on EC2 instances, say 3 Apache front ends, with the webroot set up on the Gluster volume mount. My concern is that if I ever need to spin up a server, I would want the server to not only be an additional Apache front end, but also another server in the Gluster file system, adding to fault tolerance as well as possibly giving a slight boost in read speed. Maybe there are better options that would fit the bill here? Thanks.

    Read the article

  • Turn computer into DAS (Direct Attached Storage)

    - by Damon
    Can we build a direct-attached storage unit by taking a computer/server, adding an HBA, and installing some appropriate software? We would use Debian as the host OS for both the DAS and the server. If so, what software do we use? And do we simply need an HBA for the DAS and the server, or do we need more hardware? The goal is to use an older server that does not have enough room for drives but does have ECC memory, server processors, redundant power supplies, dual NICs, etc. Then find any boxes, server or not -- the key being having enough room for 8-12 drives, fans, etc. -- and turn them into a DAS; build two of these DAS units and have them connected to the server. Eventually we want to have two servers using DRBD and associated services like Heartbeat and Pacemaker to create an HA setup for our server(s), but that will take a long time to configure since I have no experience with anything related to DRBD (yet) and have a learning curve to get past, not to mention the additional cost of more hardware (two servers vs. one).

    Read the article

  • Need a place to store a few bytes of meta information on storage media

    - by Jason C
    I'm working on an embedded project. I need a place to store some filesystem-independent meta information on a storage device. The device has an MSDOS partition table. The device also may have unallocated space (depending on its size), but it will be TRIMmed (and may also be blown away by new partitions in the future). I need a location on the device that is not unallocated and that has a low risk of being touched (outside of completely erasing the device). The device is only guaranteed to have an MBR at the point the metadata needs to first be written, meaning there are no EBRs/VBRs present that I could use. There are 446 bytes at the very start of the device available for MBR bootstrap code. Currently my only idea is to store data at the end of this block. However, the device is bootable and I have no way of knowing if I'd be blowing away bootstrap code or not. The sector size is 512 bytes and the MBR is the first sector; I'm pretty sure (correct me if I'm wrong) that means the second sector is available for use by partition data, so I can't use that either. Does anybody have any ideas? I need 4 bytes of space.
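
    To make the idea above concrete, here is a small C++ sketch that reads the first sector and checks whether a candidate 4-byte window at the end of the bootstrap area currently looks unused (all zeros). The device path and the chosen offset are assumptions for the example; note that bytes 440-443 commonly hold the MBR disk signature, so the window here is placed just before that, and "all zeros" is only a heuristic since a boot loader may legitimately use those bytes.

        #include <array>
        #include <fstream>
        #include <iostream>

        int main() {
            const char* device = "/dev/sdX";         // assumption: substitute the real device
            const std::size_t kWindowOffset = 436;   // 4 bytes ending just before offset 440
            std::ifstream dev(device, std::ios::binary);
            if (!dev) { std::cerr << "cannot open " << device << "\n"; return 1; }

            std::array<char, 512> mbr{};
            if (!dev.read(mbr.data(), mbr.size())) { std::cerr << "short read\n"; return 1; }

            bool window_is_zero = true;
            for (std::size_t i = kWindowOffset; i < kWindowOffset + 4; ++i)
                if (mbr[i] != 0) window_is_zero = false;

            std::cout << "candidate window at offset " << kWindowOffset
                      << (window_is_zero ? " looks unused\n" : " is occupied\n");
            return 0;
        }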

    Read the article

  • Industrial strength cloud file storage

    - by ArthurG
    I'm looking for an industrial-strength cloud file storage system. It will be used by multiple people in a startup. Our requirements:
    - Transparent file system access: files and folders in the file system must be able to transparently access (read and write) files in the cloud; files must be synchronized whenever network access is available and buffered otherwise. The system must be usable by non-technical people.
    - Access control: we need to control who can access which files, at least on a very coarse basis. E.g., the developers will be able to access the system design documents, only the corporate folks can access recruiting documents, and only management can access certain corporate documents. Dropbox provides this via shared folders, but that's not adequate, if I understand it correctly, because there's no authentication of the sharing user. So the cloud service should have a notion of an account (our startup) with multiple users with distinct credentials and rights for each user.
    - Clients: it must be accessible from Macs and PCs; I would hope that it supports Linux (e.g., Ubuntu) too.
    - Security: it must provide robust security.
    - Backup: the cloud service must reliably back up the files.
    - Versioning: change version history is a big plus, but not required.
    - Not free: we're willing to pay for the service.
    So far, we've reviewed the following, albeit not completely thoroughly:
    - Dropbox: has all except 1) access control, which is provided via shared folders, but that's not adequate, if I understand it correctly, because there's no authentication of the sharing user, and 2) security, as discussed here http://www.economist.com/blogs/babbage/2011/05/internet_security and here http://blog.dropbox.com/?p=821.
    - Windows Live Mesh: has all except 1) clients, only supporting Windows 7 and OS X.
    - SpiderOak: has all except 1) transparent file system access, which is only available for 1 user.
    - Amazon Cloud: doesn't offer 1) transparent file system access.
    - Rackspace Cloud Drive: has all except 1) access control and 2) versioning.
    I'll gladly include any clarifications or additional systems the community provides. Arthur

    Read the article

  • centos iptables, restrict tcp port to specific ips

    - by user788171
    I would like to modify the iptables on my CentOS 5.8 server so that only specific IPs can connect to the machine on a specific port. Currently, I have the following in my iptables file:
    -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 5000 -j ACCEPT
    How would I modify that line if I wanted to allow access only for IPs 1.1.1.1 and 1.1.1.2, for instance? (They might not necessarily be sequential IPs when I do this for real.)

    Read the article

  • Strange behavior when changing default shell, and setting explorer.exe as winlogon shell for specific user

    - by Ophir Yoktan
    I use a custom logon shell on a machine (Windows 7) for security reasons, which works fine by altering HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\shell. However, I also want the administrator to still be able to manage the machine, so I modified the user-specific key HKEY_CURRENT_USER\Software\Microsoft\Windows NT\CurrentVersion\Winlogon\shell back to explorer.exe -- but when I log on I get a single Windows Explorer window, and not the full desktop. Does anyone know how to configure the normal desktop shell for only a specific user?

    Read the article

  • Triggering a server action on a specific event

    - by Oli
    I would like to launch a specific command [1] on a mail server whenever a message is moved from one folder into another folder. For example, a Thunderbird user moves a message from folder A to folder B. I'd like to catch this move and launch a specific script on the server. Is this possible? I'm using qmail with courier-imap. [1] A bash or python script, ...

    Read the article

  • Exchange 2003 Delete Specific Emails

    - by nonpoly
    Over the weekend our Exchange server was blasted with emails. Using recipient policies in the Mailbox Manager, how do I remove emails that are in the inbox but coming from a specific sender (or maybe containing a specific subject)? Perhaps someone has some suggestions for another route to take aside from recipient policies, but that will effectively achieve the same end goal? Any help is much appreciated.

    Read the article

  • Require a specific email header field with postfix

    - by Stefan Amyotte
    I want to set up Postfix so that email lacking a specific header is rejected. Is it possible to use header_checks to reject emails that do not include a specific header field entry? The solution that I believe may work is the following:
    /^x-tituslabs-classifications-30: (<>)?$/ REJECT Classification field required
    I want to make sure that any email going through Postfix contains an x-tituslabs-classifications-30 entry.

    Read the article

  • Deny login from certain hosts if logging in with specific sql credentials

    - by Dave
    I want to stop some of our developers from connecting to the production SQL Server using a specific SQL account. They can connect through Windows authentication with lower privileges. They claim that changing the password will affect too many other processes running on our processing machine, so I want to deny access if they're connecting from their dev machines for now. Another way this would work is if I could just allow connections from one specific host.

    Read the article

  • Active Directory - Join domain in specific OU when its a workstation

    - by Jonathan Rioux
    I would like to know how I can accomplish the following: when I join a workstation (with Windows 7) to the domain, I want that computer to be put into a specific OU, but only when it is a workstation with Windows 7. This is because I have GPOs that must apply to all workstations in the domain. Can I only accomplish this using a script? Or can I set a rule like: if the computer has Windows 7, put that computer into this specific OU?

    Read the article
