Search Results

Search found 4346 results on 174 pages for 'shop direct'.

Page 20/174

  • Restructuring Site - SEO

    - by a1anm
    I am planning to restructure my site slightly, which means certain URLs will be changing. I rank quite well in Google for some of these pages. What can I do to retain those rankings once I change the URLs? Here is an example of some of the changes:
    twistedtime.com/mens-watches.html to twistedtime.com/shop-by/gender/mens-watches.html
    twistedtime.com/watch-brands/lip-watches.html to twistedtime.com/shop-by/watch-brands/lip-watches.html
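
    One common way to keep rankings when URLs move is a permanent (301) redirect from each old address to its new one, so search engines transfer the existing page authority. A minimal sketch for an Apache .htaccess file, assuming the site runs on Apache (the mechanism differs on other servers):

        # 301 (permanent) redirects from the old paths to the new ones
        Redirect 301 /mens-watches.html /shop-by/gender/mens-watches.html
        Redirect 301 /watch-brands/lip-watches.html /shop-by/watch-brands/lip-watches.html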

    Read the article

  • nsmutablearray and saving to file

    - by Amir
    Hello all, I have a class named Shop that contains data members (an NSString, an NSInteger, and an NSMutableArray holding instances of another class, which itself also has an NSString and an NSInteger). If I use an NSMutableArray to hold a list of Shops, what is the best way to save the list to a file and load it later? To repeat: the class Shop contains a data member that is another class, and both classes hold NSString and NSInteger values (maybe also NSData and NSDate). I heard something about an archiver? Thanks.
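
    The "archiver" the question is remembering is NSKeyedArchiver, and the usual pattern is to have each class adopt NSCoding so nested objects and arrays archive themselves. A minimal sketch, with invented property names standing in for the real Shop fields:

        // Shop.h -- property names here are made up for illustration
        @interface Shop : NSObject <NSCoding>
        @property (nonatomic, copy)   NSString       *name;
        @property (nonatomic, assign) NSInteger       rating;
        @property (nonatomic, retain) NSMutableArray *items;   // elements must also adopt NSCoding
        @end

        // Shop.m
        @implementation Shop
        @synthesize name, rating, items;

        - (void)encodeWithCoder:(NSCoder *)coder {
            [coder encodeObject:self.name forKey:@"name"];
            [coder encodeInteger:self.rating forKey:@"rating"];
            [coder encodeObject:self.items forKey:@"items"];
        }

        - (id)initWithCoder:(NSCoder *)coder {
            if ((self = [super init])) {
                self.name   = [coder decodeObjectForKey:@"name"];
                self.rating = [coder decodeIntegerForKey:@"rating"];
                self.items  = [NSMutableArray arrayWithArray:[coder decodeObjectForKey:@"items"]];
            }
            return self;
        }
        @end

        // Somewhere in the save/load code, for a whole NSMutableArray of Shop objects:
        BOOL saved = [NSKeyedArchiver archiveRootObject:shops toFile:path];
        NSMutableArray *loaded = [NSMutableArray arrayWithArray:
                                     [NSKeyedUnarchiver unarchiveObjectWithFile:path]];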

    Read the article

  • Loading the Cache from the Business Application Server

    - by ACShorten
    By default, the Web Application Server connects directly to the database to load its cache at startup time. Customers who implement the product installation in distributed mode, where the Web Application Server and Business Application Server are deployed separately, may wish to prevent the Web Application Server from connecting to the database directly. Installation of the product in distributed mode was introduced in Oracle Utilities Application Framework V2.2. In the Advanced Web Application Server configuration, it is possible to set Create Simple Web Application Context (WEBAPPCONTEXT) to true to force the Web Application Server to load its cache via the Business Application Server rather than loading it directly. The value false retains the default behavior of allowing the Web Application Server to connect directly to the database at startup time to load the cache. The value true loads the cache data via calls to the Business Application Server, which can cause a slight delay in the startup process compared with the direct load. (The original article illustrates the impact of the two settings with a figure.) When setting this value to true, the following properties files should be manually removed prior to executing the product:
    $SPLEBASE/etc/conf/root/WEB-INF/classes/hibernate.properties
    $SPLEBASE/splapp/applications/root/WEB-INF/classes/hibernate.properties
    Note: For customers who are using a local installation, where the Web Application Server and Business Application Server are combined in the deployed server, it is recommended to leave this parameter at its default of false unless otherwise required. This facility is available for Oracle Utilities Application Framework V4.1 in Group Fix 3 (via Patch 11900153) and Patch 13538242, available from My Oracle Support.
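
    When WEBAPPCONTEXT is set to true, the two properties files listed above must be removed by hand before the product is started; a minimal sketch of that step from a shell, using exactly the paths the article names:

        # remove the hibernate.properties copies so the Web Application Server
        # cannot fall back to a direct database connection
        rm $SPLEBASE/etc/conf/root/WEB-INF/classes/hibernate.properties
        rm $SPLEBASE/splapp/applications/root/WEB-INF/classes/hibernate.properties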

    Read the article

  • PCI compliance when using third-party processing

    - by Moses
    My company is outsourcing the development of our new e-commerce site to a third-party web development company. The way they set up our site to handle transactions is by having the user enter the necessary payment info, then passing that data to a third-party merchant that processes the payment, then completing the transaction if everything is good. When the issue of PCI/DSS compliance was raised, they said: You won't need PCI certification because the client's browser will send the sensitive information directly to the third-party merchant when the transaction is processed. However, the process will be transparent to the user because all interface and displays are controlled by us. The only server required to be compliant is the third-party merchant's, because no sensitive card data ever touches your server or web app. Even though I very much trust and respect the knowledge of our web developers, what they are saying is raising some serious red flags for me. The way the site is described, I am sure we will not be using a hosted payment page like PayPal or Google Checkout offers (how could we maintain control over the UI if we were?), and while my knowledge of e-commerce is laughable at best, it seems like the only other option for us would be to use XML Direct to communicate with our third-party merchant for processing. My two questions are as follows: Based on everything you've read, is "XML Direct" the only option they could conceivably be using, or is there another method I don't know of which they could be implementing? Most importantly, is it true our site does not need PCI certification? As I understand it, using the XML Direct method means that we do have to be PCI/DSS certified, and the only way around getting certified is through a hosted payment page (i.e. PayPal).

    Read the article

  • Squid refresh_pattern won't cache "Expires: ..."

    - by Marcelo Cantos
    Background I frequent the OpenGL ES documentation site at http://www.khronos.org/opengles/sdk/1.1/docs/man/. Even though the content is completely static, it seems to force a reload on every single page I visit, which is very annoying. I have a squid 3.0 proxy set up (apt-get install squid3 on Ubuntu 10.04), and I added a refresh_pattern to force the pages to cache: refresh_pattern ^http://www.khronos.org/opengles/sdk/1\.1/docs/man/ … 1440 20% 10080 … override-expire ignore-reload ignore-no-cache ignore-private ignore-no-store This is all on one line, of course. While this appears to work for the XHTML documents (e.g., glBindTexture), it fails to cache the linked content, such as the DTD, some .ent files (?) and some XSL files. The delay in fetching these extra files delays rendering of the main document, so my principal annoyance isn't fixed. The only difference I can glean with these ancillary files is that they come with an Expires: header set to the current time, whereas the XHTML document has none. But I would have expected the override-expire option to fix this. I have confirmed that documents have the same base URL. I have also truncated the pattern to varying degrees, with no effect. My questions Why does the override-expire option not seem to work? Is there a simple way to tell squid to unconditionally cache a document, no matter what it finds in the response headers? (Hopefully) relevant output cache.log Jan 01 10:33:30 1970/06/25 21:18:27| Processing Configuration File: /etc/squid3/squid.conf (depth 0) Jan 01 10:33:30 1970/06/25 21:18:27| WARNING: use of 'override-expire' in 'refresh_pattern' violates HTTP Jan 01 10:33:30 1970/06/25 21:18:27| WARNING: use of 'ignore-reload' in 'refresh_pattern' violates HTTP Jan 01 10:33:30 1970/06/25 21:18:27| WARNING: use of 'ignore-no-cache' in 'refresh_pattern' violates HTTP Jan 01 10:33:30 1970/06/25 21:18:27| WARNING: use of 'ignore-no-store' in 'refresh_pattern' violates HTTP Jan 01 10:33:30 1970/06/25 21:18:27| WARNING: use of 'ignore-private' in 'refresh_pattern' violates HTTP Jan 01 10:33:30 1970/06/25 21:18:27| DNS Socket created at 0.0.0.0, port 37082, FD 10 Jan 01 10:33:30 1970/06/25 21:18:27| Adding nameserver 192.168.1.1 from /etc/resolv.conf Jan 01 10:33:30 1970/06/25 21:18:27| Accepting HTTP connections at 0.0.0.0, port 3128, FD 11. Jan 01 10:33:30 1970/06/25 21:18:27| Accepting ICP messages at 0.0.0.0, port 3130, FD 13. Jan 01 10:33:30 1970/06/25 21:18:27| HTCP Disabled. Jan 01 10:33:30 1970/06/25 21:18:27| Loaded Icons. Jan 01 10:33:30 1970/06/25 21:18:27| Ready to serve requests. 
access.log Jun 25 21:19:35 2010.710 0 192.168.1.50 TCP_MEM_HIT/200 2452 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/glBindTexture.xml - NONE/- text/xml Jun 25 21:19:36 2010.263 543 192.168.1.50 TCP_MISS/304 322 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/xhtml1-transitional.dtd - DIRECT/74.54.224.215 - Jun 25 21:19:36 2010.276 556 192.168.1.50 TCP_MISS/304 370 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/mathml.xsl - DIRECT/74.54.224.215 - Jun 25 21:19:36 2010.666 278 192.168.1.50 TCP_MISS/304 322 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/xhtml-lat1.ent - DIRECT/74.54.224.215 - Jun 25 21:19:36 2010.958 279 192.168.1.50 TCP_MISS/304 322 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/xhtml-symbol.ent - DIRECT/74.54.224.215 - Jun 25 21:19:37 2010.251 276 192.168.1.50 TCP_MISS/304 322 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/xhtml-special.ent - DIRECT/74.54.224.215 - Jun 25 21:19:37 2010.332 0 192.168.1.50 TCP_IMS_HIT/304 316 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/ctop.xsl - NONE/- text/xml Jun 25 21:19:37 2010.332 0 192.168.1.50 TCP_IMS_HIT/304 316 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/pmathml.xsl - NONE/- text/xml store.log Jun 25 21:19:36 2010.263 RELEASE -1 FFFFFFFF D3056C09B42659631A65A08F97794E45 304 1277464776 -1 1277464776 unknown -1/0 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/xhtml1-transitional.dtd Jun 25 21:19:36 2010.276 RELEASE -1 FFFFFFFF 9BF7F37442FD84DD0AC0479E38329E3C 304 1277464776 -1 1277464776 unknown -1/0 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/mathml.xsl Jun 25 21:19:36 2010.666 RELEASE -1 FFFFFFFF 7BCFCE88EC91578C8E2589CB6310B3A1 304 1277464776 -1 1277464776 unknown -1/0 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/xhtml-lat1.ent Jun 25 21:19:36 2010.958 RELEASE -1 FFFFFFFF ECF1B24E437CFAA08A2785AA31A042A0 304 1277464777 -1 1277464777 unknown -1/0 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/xhtml-symbol.ent Jun 25 21:19:37 2010.251 RELEASE -1 FFFFFFFF 36FE3D76C80F0106E6E9F3B7DCE924FA 304 1277464777 -1 1277464777 unknown -1/0 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/xhtml-special.ent Jun 25 21:19:37 2010.332 RELEASE -1 FFFFFFFF A33E5A5CCA2BFA059C0FA25163485192 304 1277462871 1221139523 1277462871 text/xml -1/0 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/ctop.xsl Jun 25 21:19:37 2010.332 RELEASE -1 FFFFFFFF E2CF8854443275755915346052ACE14E 304 1277462872 1221139523 1277462872 text/xml -1/0 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/pmathml.xsl
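
    On the second question (unconditionally caching a document), the closest squid 3.0 comes is a refresh_pattern that overrides both the Expires: and Last-Modified heuristics as well as client reloads. A sketch, reusing the question's own min/percent/max values and written on a single line as squid.conf requires; whether this also changes the 304 behaviour shown in the logs is not guaranteed:

        refresh_pattern -i ^http://www\.khronos\.org/opengles/sdk/1\.1/docs/man/ 1440 20% 10080 override-expire override-lastmod ignore-reload ignore-no-cache ignore-private ignore-no-store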

    Read the article

  • Strange performance behaviour

    - by plastilino
    I'm puzzled by this. On my machine: Direct calculation: 375 ms; Method calculation: 3594 ms, about TEN times SLOWER. If I place the method calculation BEFORE the direct calculation, both times are SIMILAR. Would you check it on your machine?
    class Test {
        static long COUNT = 50000 * 10000;
        private static long BEFORE;

        /*--------METHOD---------*/
        public static final double hypotenuse(double a, double b) {
            return Math.sqrt(a * a + b * b);
        }

        /*--------TIMER---------*/
        public static void getTime(String text) {
            if (BEFORE == 0) {
                BEFORE = System.currentTimeMillis();
                return;
            }
            long now = System.currentTimeMillis();
            long elapsed = (now - BEFORE);
            BEFORE = System.currentTimeMillis();
            if (text.equals("")) {
                return;
            }
            String message = "\r\n" + text + "\r\n" + "Elapsed time: " + elapsed + " ms";
            System.out.println(message);
        }

        public static void main(String[] args) {
            double a = 0.2223221101;
            double b = 122333.167;
            getTime("");
            /*--------DIRECT CALCULATION---------*/
            for (int i = 1; i < COUNT; i++) {
                Math.sqrt(a * a + b * b);
            }
            getTime("Direct: ");
            /*--------METHOD---------*/
            for (int k = 1; k < COUNT; k++) {
                hypotenuse(a, b);
            }
            getTime("Method: ");
        }
    }

    Read the article

  • Reading a directory

    - by paleman
    Hi, I'm trying to solve an exercise from K&R about reading directories. This task is system dependent because it uses system calls. In the book's example the authors say their code is written for Version 7 and System V UNIX systems, and that they used the directory information in the header <sys/dir.h>, which looks like this:
    #ifndef DIRSIZ
    #define DIRSIZ 14
    #endif
    struct direct {              /* directory entry */
        ino_t d_ino;             /* inode number */
        char  d_name[DIRSIZ];    /* long name does not have '\0' */
    };
    On those systems they use 'struct direct' combined with the 'read' function to retrieve a directory entry, which consists of a file name and an inode number.
    .....
    struct direct dirbuf;  /* local directory structure */
    while (read(dp->fd, (char *) &dirbuf, sizeof(dirbuf)) == sizeof(dirbuf)) {
        .....
    }
    .....
    I suppose this works fine on UNIX and Linux systems, but what I want to do is modify this so it works on Windows XP. Is there a structure in Windows like 'struct direct' that I can use with the 'read' function, and if so, in which header is it defined? Or does Windows require a completely different approach?
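
    Windows does not expose directories through read() on a file descriptor; the usual route is the Win32 FindFirstFile/FindNextFile API (or the _findfirst family in the C runtime). A minimal sketch of the Win32 approach in C, using the explicit ANSI variants; the path is only an example:

        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            WIN32_FIND_DATAA ffd;               /* plays the role of "struct direct" */
            HANDLE h = FindFirstFileA("C:\\some\\dir\\*", &ffd);

            if (h == INVALID_HANDLE_VALUE)
                return 1;
            do {
                /* ffd.cFileName is the entry name; Windows has no inode number */
                printf("%s\n", ffd.cFileName);
            } while (FindNextFileA(h, &ffd));
            FindClose(h);
            return 0;
        }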

    Read the article

  • Best Practices - which domain types should be used to run applications

    - by jsavit
    This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (formerly named Logical Domains). One question that frequently comes up is "which types of domain should I use to run applications?" There used to be a simple answer in most cases: "only run applications in guest domains", but enhancements to T-series servers, Oracle VM Server for SPARC and the advent of SPARC SuperCluster have made this question more interesting and worth qualifying differently. This article reviews the relevant concepts and provides suggestions on where to deploy applications in a logical domains environment.
    Review: division of labor and types of domain
    Oracle VM Server for SPARC offloads many functions from the hypervisor to domains (also called virtual machines). This is a modern alternative to using a "thick" hypervisor that provides all virtualization functions, as in traditional VM designs. This permits a simpler hypervisor design, which enhances reliability and security. It also reduces single points of failure by assigning responsibilities to multiple system components, which further improves reliability and security. In this architecture, management and I/O functionality are provided within domains. Oracle VM Server for SPARC does this by defining the following types of domain, each with its own role:
    Control domain - the management control point for the server, used to configure domains and manage resources. It is the first domain to boot on power-up, is an I/O domain, and is usually a service domain as well.
    I/O domain - a domain that has been assigned physical I/O devices: a PCIe root complex, a PCI device, or an SR-IOV (single-root I/O virtualization) function. It has native performance and functionality for the devices it owns, unmediated by any virtualization layer.
    Service domain - provides virtual network and disk devices to guest domains.
    Guest domain - a domain whose devices are all virtual rather than physical: virtual network and disk devices provided by one or more service domains. In common practice, this is where applications are run.
    Typical deployment
    A service domain is generally also an I/O domain: otherwise it wouldn't have access to physical device "backends" to offer to its clients. Similarly, an I/O domain is also typically a service domain in order to leverage the available PCI busses. Control domains must be I/O domains, because they boot up first on the server and require physical I/O. It's typical for the control domain to also be a service domain so it doesn't "waste" the I/O resources it uses. A simple configuration consists of a control domain, which is also the sole I/O and service domain, and some number of guest domains using virtual I/O. In production, customers typically use multiple domains with I/O and service roles to eliminate single points of failure: guest domains have virtual disk and virtual network devices provisioned from more than one service domain, so failure of a service domain or of an I/O path or device doesn't result in an application outage. This is also used for "rolling upgrades", in which service domains are upgraded one at a time while their guests continue to operate without disruption. (It should be noted that resiliency to I/O device failures can also be provided by a single control domain, using multi-path I/O.) In this type of deployment, control, I/O, and service domains are used for virtualization infrastructure, while applications run in guest domains.
    Changing application deployment patterns
    The above model has been widely and successfully used, but more configuration options are available now. Servers got bigger than the original T2000-class machines with 2 I/O busses, so there is more I/O capacity that can be used for applications. Increased T-series server capacity made it attractive to run more vertical applications, such as databases, with higher resource requirements than the "light" applications originally seen. This made it attractive to run applications in I/O domains so they could get bare-metal native I/O performance. This is leveraged by the SPARC SuperCluster engineered system, announced a year ago at Oracle OpenWorld. In SPARC SuperCluster, I/O domains are used for high performance applications, with native I/O performance for disk and network and optimized access to the InfiniBand fabric. Another technical enhancement is the introduction of Direct I/O (DIO) and Single Root I/O Virtualization (SR-IOV), which make it possible to give domains direct connections and native I/O performance for selected I/O devices. A domain with either a DIO or SR-IOV device is an I/O domain. In summary: not all I/O domains own PCIe root complexes, and there are increasingly more I/O domains that are not service domains; they use their I/O connectivity for the performance of their own applications. However, there are some limitations and considerations: at this time, a domain using physical I/O cannot be live-migrated to another server. There is also a need to plan for security and to avoid introducing unneeded dependencies: if an I/O domain is also a service domain providing virtual I/O to guests, it has the ability to affect the correct operation of its client guest domains. This is even more relevant for the control domain, where the ldm command has to be protected from unauthorized (or even mistaken) use that would affect other domains. As a general rule, running applications in the service domain or the control domain should be avoided. To recap:
    Guest domains with virtual I/O still provide the greatest operational flexibility, including features like live migration.
    I/O domains can be used for applications with high performance requirements. This is used to great effect in SPARC SuperCluster and in general T4 deployments. Direct I/O (DIO) and Single Root I/O Virtualization (SR-IOV) make this more attractive by giving direct I/O access to more domains.
    Service domains should in general not be used for applications, because compromised security in the domain, or an outage, can affect other domains that depend on it. This concern can be mitigated by providing guests their virtual I/O from more than one service domain, so an interruption of service in one service domain does not cause an application outage.
    The control domain should in general not be used to run applications, for the same reason. SPARC SuperCluster uses the control domain for applications, but it is an exception: it's not a general purpose environment; it's an engineered system with specifically configured applications, tuned for optimal performance.
    These are recommended "best practices" based on conversations with a number of Oracle architects. Keep in mind that "one size does not fit all", so you should evaluate these practices in the context of your own requirements.
    Summary
    Higher capacity T-series servers have made it more attractive to use them for applications with high resource requirements.
New deployment models permit native I/O performance for demanding applications by running them in I/O domains with direct access to their devices. This is leveraged in SPARC SuperCluster, and can be leveraged in T-series servers to provision high-performance applications running in domains. Carefully planned, this can be used to provide higher performance for critical applications.
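
    For reference, the kind of guest domain the article recommends for most applications (all-virtual I/O served by the control/service domain) is built with the ldm command. A minimal sketch, assuming a virtual switch primary-vsw0 and disk service primary-vds0 already exist in the control domain; the domain, device and volume names are made up for illustration:

        # create a guest domain with virtual CPU, memory, network and disk
        ldm add-domain ldg1
        ldm set-vcpu 8 ldg1
        ldm set-memory 8G ldg1
        ldm add-vnet vnet0 primary-vsw0 ldg1                  # virtual NIC from the service domain's vswitch
        ldm add-vdsdev /dev/dsk/c0t1d0s2 vol1@primary-vds0    # back a volume with a physical device
        ldm add-vdisk vdisk0 vol1@primary-vds0 ldg1           # ...and present it to the guest
        ldm bind-domain ldg1
        ldm start-domain ldg1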

    Read the article

  • How to switch users in a smooth way in a Point-Of-Sale system?

    - by Sanoj
    I am designing a Point-Of-Sale system for a small shop. The shop has just one Point-Of-Sale terminal, but often there are one to three users (sellers) in the shop. Each user has their own account in the system, so they log in and log out very often. How should I design the login/logout system in a good way? For the moment the users don't use passwords, because it takes too long to type a password each time they log in. The platform is Windows Vista, but I would like to support Windows 7 too. We use Active Directory on the network. The system is developed in Java/Swing for the moment, but I'm thinking about changing to C#.NET/WPF. I am considering a smartcard solution, but I don't know if it fits my situation. It would be more secure (which I like), but I don't know if it will be easy to implement and smooth to use, i.e. can the POS system keep running in the background or be started very quickly when the users switch? Are smartcard solutions very expensive? (My customers are small shops.) Is it preferable to use .NET or Java in a smartcard solution? What other solutions do I have besides passwords/no passwords/smartcards? Is there any good solution using smartcards for this purpose? I would like suggested solutions both for C#.NET/WPF and Java/Swing, and both for Active Directory setups and setups that use only one user profile in Windows. How is this problem solved in similar products? I have only seen password solutions, and they are clumsy.

    Read the article

  • django powering multiple shops from one code base on a single domain

    - by imanc
    Hey, I am new to django and python and am trying to figure out how to modify an existing app to run multiple shops through a single domain. Django's sites middleware seems inappropriate in this particular case because it manages different domains, not sites run through the same domain, e.g. domain.com/uk, domain.com/us, domain.com/es, etc. Each site will need translated content and minor template changes, and the solution needs to be flexible enough to allow easy modification of templates. The forms will also need to vary a bit, e.g. minor variances in fields and validation for each country-specific shop. I am thinking along the lines of the following as a solution and would love some feedback from experienced django-ers. In short: same codebase, but separate country-specific urls files, separate templates and a separate database per country:
    Create a middleware class that does IP localisation, determines the country based on the URL and creates a database connection, e.g. /au/ will point to the au-specific database and so on.
    In the root urls.py, have routes that point to a separate country-specific routing file, e.g. (r'^au/', include('urls_au')), (r'^es/', include('urls_es')).
    Use a single template directory, but in that directory have a localised directory structure, e.g. /base.html and /uk/base.html, and write a custom template loader that looks for local templates first (or have a separate directory for each shop and set the template directory path in middleware).
    Use the django internationalisation machinery to manage translation strings throughout.
    Handle slight variances in forms and models (e.g. ZA has an ID field, France has 'door code' and 'floor', etc.). I am unsure how to handle these variations, but I suspect the tables will contain all fields but allow nulls, the models will have all fields but allow nulls, and the forms will be modified slightly for each shop.
    Anyway, I am keen to get feedback on the best way to go about achieving this multi-site solution. It seems like it would work, but feels a bit "hackish", and I wonder if there's a more elegant way of getting this solution to work. Thanks, imanc
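
    A minimal sketch of the root urls.py the second point describes, in the Django 1.x style of the era; the urls_au / urls_uk / urls_es module names are the question's own hypothetical ones:

        # urls.py -- route each country prefix to its own URL module
        from django.conf.urls.defaults import patterns, include

        urlpatterns = patterns('',
            (r'^au/', include('urls_au')),
            (r'^uk/', include('urls_uk')),
            (r'^es/', include('urls_es')),
        )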

    Read the article

  • Trouble with piping through sed

    - by Joel
    I am having trouble piping through sed. Once I have piped output to sed, I cannot pipe the output of sed elsewhere.
    wget -r -nv http://127.0.0.1:3000/test.html
    Outputs:
    2010-03-12 04:41:48 URL:http://127.0.0.1:3000/test.html [99/99] -> "127.0.0.1:3000/test.html" [1]
    2010-03-12 04:41:48 URL:http://127.0.0.1:3000/robots.txt [83/83] -> "127.0.0.1:3000/robots.txt" [1]
    2010-03-12 04:41:48 URL:http://127.0.0.1:3000/shop [22818/22818] -> "127.0.0.1:3000/shop.29" [1]
    I pipe the output through sed to get a clean list of URLs:
    wget -r -nv http://127.0.0.1:3000/test.html 2>&1 | grep --line-buffered -v ERROR | sed 's/^.*URL:\([^ ]*\).*/\1/g'
    Outputs:
    http://127.0.0.1:3000/test.html
    http://127.0.0.1:3000/robots.txt
    http://127.0.0.1:3000/shop
    I would like to then dump the output to a file, so I do this:
    wget -r -nv http://127.0.0.1:3000/test.html 2>&1 | grep --line-buffered -v ERROR | sed 's/^.*URL:\([^ ]*\).*/\1/g' > /tmp/DUMP_FILE
    I interrupt the process after a few seconds and check the file, yet it is empty. Interestingly, the following yields no output (same as above, but piping sed's output through cat):
    wget -r -nv http://127.0.0.1:3000/test.html 2>&1 | grep --line-buffered -v ERROR | sed 's/^.*URL:\([^ ]*\).*/\1/g' | cat
    Why can I not pipe the output of sed to another program like cat?
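
    A likely explanation, worth verifying, is sed's output buffering: when its output is not a terminal, sed buffers it in blocks, so nothing reaches the file or cat before the pipeline is interrupted. GNU sed can be told to flush after every line with -u/--unbuffered; a sketch of the same pipeline with that flag:

        wget -r -nv http://127.0.0.1:3000/test.html 2>&1 \
          | grep --line-buffered -v ERROR \
          | sed -u 's/^.*URL:\([^ ]*\).*/\1/g' > /tmp/DUMP_FILE   # -u = --unbuffered (GNU sed)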

    Read the article

  • MYSQL - SELECT ALL FROM TABLE if...

    - by hornetbzz
    Hello, I have a (nice) mysql table built like this:
    Fields        Datas
    id (pk)       1       2       3       4       5       6
    master_id     1000    1000    1000    2000    2000    2000    ...
    master_name   home    home    home    shop    shop    shop    ...
    type_data     value   common  client  value   common  client  ...
    param_a       foo_a   1       0       bar_a   0       1       ...
    param_b       foo_b   1       0       bar_b   1       0       ...
    param_c       foo_c   0       1       bar_c   0       1       ...
    ...           ...     ...     ...     ...     ...     ...     ...
    All these datas are embedded in a single table. Each set of data is spread over 3 "columns" (1 holding the values, 1 flagging which are common values and 1 flagging client values). It's not the best design I have, but many other scripts depend on this structure. I'd need something like this: SELECT the parameter names (e.g. param_a, param_b..) and their values (e.g. foo_a, foo_b..) WHEN master_id=? AND type_data=(common or client) (e.g. for values=1 in the 2nd column), in order to get a parameters hash like:
    param_a => foo_a
    param_b => foo_b
    param_c => foo_c
    ...
    I could not manage to self-join the table so far, but I guess it should be feasible. (I'd like to avoid doing several queries.) Thanks in advance
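
    A sketch of the self-join the question is after, pairing the row that holds the values (type_data = 'value') with the row that holds the common/client flags for the same master_id; the table name params is hypothetical, and turning the selected columns into a name => value hash would still happen in the calling code:

        -- join the 'value' row to the flag row for one master_id
        SELECT v.param_a, v.param_b, v.param_c,
               f.param_a AS param_a_flag,
               f.param_b AS param_b_flag,
               f.param_c AS param_c_flag
        FROM   params v
        JOIN   params f
               ON  f.master_id = v.master_id
               AND f.type_data = 'common'      -- or 'client'
        WHERE  v.master_id = 1000
          AND  v.type_data = 'value';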

    Read the article

  • Should a developer write their own test plan for Q/A?

    - by Mat Nadrofsky
    Who writes the test plans in your shop? Who should write them? I realize developers (like me) regularly do their own unit testing whilst developing and in some cases even their own Q/A depending on the size of the shop and the nature of the business, but in a big software shop with a full development team and Q/A team, who should be writing those official "my changes are done now" test plans? Soon, we'll be bringing on another Q/A member to our development team. My question is, going forward, is it a good practice to get your developers to write their own test plans? Something tells me that part of that might make sense but another part might not... What I like about that: Developer is very familiar with the changes made, thus it's easy to produce a document... What I don't like about that: Developer knows how it's supposed to work and might write a test plan that caters to this without knowing it. So, with the above in mind, what is the general stance on this topic? I'm of course already reading books like the Mythical Man-Month, Code Complete and a few others which really do help, but I'd like to get some input from the group as well.

    Read the article

  • Why do I see a large performance hit with DRBD?

    - by BHS
    I see a much larger performance hit with DRBD than their user manual says I should get. I'm using DRBD 8.3.7 (Fedora 13 RPMs). I've set up a DRBD test and measured throughput of disk and network without DRBD:
    dd if=/dev/zero of=/data.tmp bs=512M count=1 oflag=direct
    536870912 bytes (537 MB) copied, 4.62985 s, 116 MB/s
    (/ is a logical volume on the disk I'm testing with, mounted without DRBD.)
    iperf:
    [ 4] 0.0-10.0 sec 1.10 GBytes 941 Mbits/sec
    According to Throughput overhead expectations, the bottleneck would be whichever is slower, the network or the disk, and DRBD should have an overhead of 3%. In my case network and I/O seem to be pretty evenly matched. It sounds like I should be able to get around 100 MB/s. So, with the raw drbd device, I get
    dd if=/dev/zero of=/dev/drbd2 bs=512M count=1 oflag=direct
    536870912 bytes (537 MB) copied, 6.61362 s, 81.2 MB/s
    which is slower than I would expect. Then, once I format the device with ext4, I get
    dd if=/dev/zero of=/mnt/data.tmp bs=512M count=1 oflag=direct
    536870912 bytes (537 MB) copied, 9.60918 s, 55.9 MB/s
    This doesn't seem right. There must be some other factor playing into this that I'm not aware of.
    global_common.conf:
    global { usage-count yes; }
    common { protocol C; }
    syncer { al-extents 1801; rate 33M; }
    data_mirror.res:
    resource data_mirror {
        device /dev/drbd1;
        disk /dev/sdb1;
        meta-disk internal;
        on cluster1 { address 192.168.33.10:7789; }
        on cluster2 { address 192.168.33.12:7789; }
    }
    For the hardware I have two identical machines: 6 GB RAM, quad-core AMD Phenom 3.2 GHz, motherboard SATA controller, 7200 RPM 64 MB cache 1 TB WD drive. The network is 1 Gb connected via a switch. I know that a direct connection is recommended, but could it make this much of a difference?
    Edit: I just tried monitoring the bandwidth used to try to see what's happening. I used ibmonitor and measured average bandwidth while I ran the dd test 10 times. I got:
    avg ~450 Mbits writing to ext4
    avg ~800 Mbits writing to raw device
    It looks like with ext4, drbd is using about half the bandwidth it uses with the raw device, so there's a bottleneck that is not the network.

    Read the article

  • Quantifying the effects of partition mis-alignment

    - by Matt
    I'm experiencing some significant performance issues on an NFS server. I've been reading up a bit on partition alignment, and I think I have my partitions mis-aligned. I can't find anything that tells me how to actually quantify the effects of mis-aligned partitions. Some of the general information I found suggests the performance penalty can be quite high (upwards of 60%) and others say it's negligible. What I want to do is determine if partition alignment is a factor in this server's performance problems or not; and if so, to what degree? So I'll put my info out here, and hopefully the community can confirm if my partitions are indeed mis-aligned, and if so, help me put a number to what the performance cost is. Server is a Dell R510 with dual E5620 CPUs and 8 GB RAM. There are eight 15k 2.5” 600 GB drives (Seagate ST3600057SS) configured in hardware RAID-6 with a single hot spare. RAID controller is a Dell PERC H700 w/512MB cache (Linux sees this as a LSI MegaSAS 9260). OS is CentOS 5.6, home directory partition is ext3, with options “rw,data=journal,usrquota”. I have the HW RAID configured to present two virtual disks to the OS: /dev/sda for the OS (boot, root and swap partitions), and /dev/sdb for a big NFS share: [root@lnxutil1 ~]# parted -s /dev/sda unit s print Model: DELL PERC H700 (scsi) Disk /dev/sda: 134217599s Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 63s 465884s 465822s primary ext2 boot 2 465885s 134207009s 133741125s primary lvm [root@lnxutil1 ~]# parted -s /dev/sdb unit s print Model: DELL PERC H700 (scsi) Disk /dev/sdb: 5720768639s Sector size (logical/physical): 512B/512B Partition Table: gpt Number Start End Size File system Name Flags 1 34s 5720768606s 5720768573s lvm Edit 1 Using the cfq IO scheduler (default for CentOS 5.6): # cat /sys/block/sd{a,b}/queue/scheduler noop anticipatory deadline [cfq] noop anticipatory deadline [cfq] Chunk size is the same as strip size, right? If so, then 64kB: # /opt/MegaCli -LDInfo -Lall -aALL -NoLog Adapter #0 Number of Virtual Disks: 2 Virtual Disk: 0 (target id: 0) Name:os RAID Level: Primary-6, Secondary-0, RAID Level Qualifier-3 Size:65535MB State: Optimal Stripe Size: 64kB Number Of Drives:7 Span Depth:1 Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU Current Cache Policy: WriteThrough, ReadAdaptive, Direct, No Write Cache if Bad BBU Access Policy: Read/Write Disk Cache Policy: Disk's Default Number of Spans: 1 Span: 0 - Number of PDs: 7 ... physical disk info removed for brevity ... Virtual Disk: 1 (target id: 1) Name:share RAID Level: Primary-6, Secondary-0, RAID Level Qualifier-3 Size:2793344MB State: Optimal Stripe Size: 64kB Number Of Drives:7 Span Depth:1 Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU Current Cache Policy: WriteThrough, ReadAdaptive, Direct, No Write Cache if Bad BBU Access Policy: Read/Write Disk Cache Policy: Disk's Default Number of Spans: 1 Span: 0 - Number of PDs: 7 If it's not obvious, virtual disk 0 corresponds to /dev/sda, for the OS; virtual disk 1 is /dev/sdb (the exported home directory tree).

    Read the article

  • MegaCli newly created disk doesn't appear under /dev/sdX

    - by Henry-Nicolas Tourneur
    After having successfully added 2 new disks in a new RAID virtual drive (background initialization done), I would have expected it to appear under /dev/sdh, but it's not there (so, unusable). The system is running CentOS 5.2 64-bit; the HAL and udev daemons are running; there is no record of any sdh appearing in the message log file or in dmesg; only MegaCli sees that virtual drive. Any idea? Some data:
    [root@server ~]# ./MegaCli -LDInfo -LALL -a0
    Adapter 0 -- Virtual Drive Information:
    Virtual Disk: 0 (target id: 0)
    Name:
    RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0
    Size:139392MB
    State: Optimal
    Stripe Size: 64kB
    Number Of Drives:2
    Span Depth:1
    Default Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
    Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
    Access Policy: Read/Write
    Disk Cache Policy: Disk's Default
    Virtual Disk: 1 (target id: 1)
    Name:
    RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0
    Size:285568MB
    State: Optimal
    Stripe Size: 64kB
    Number Of Drives:2
    Span Depth:1
    Default Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
    Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
    Access Policy: Read/Write
    Disk Cache Policy: Disk's Default
    [root@server ~]# ls -l /dev/disk/by-id/scsi-360*
    lrwxrwxrwx 1 root root 9 Nov 17 2010 /dev/disk/by-id/scsi-36001ec90f82fe100108ca0a704098d09 -> ../../sda
    lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36001ec90f82fe100108ca0a704098d09-part1 -> ../../sda1
    lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36001ec90f82fe100108ca0a704098d09-part2 -> ../../sda2
    lrwxrwxrwx 1 root root 9 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fe07e78f94940c0000a0ee -> ../../sdf
    lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fe07e78f94940c0000a0ee-part1 -> ../../sdf1
    lrwxrwxrwx 1 root root 9 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fe972a3f91240a0000005f -> ../../sdb
    lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fe972a3f91240a0000005f-part1 -> ../../sdb1
    lrwxrwxrwx 1 root root 9 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fea7e18f94640c000020ec -> ../../sde
    lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fea7e18f94640c000020ec-part1 -> ../../sde1
    lrwxrwxrwx 1 root root 9 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0feb7da8f94340c0000203d -> ../../sdd
    lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0feb7da8f94340c0000203d-part1 -> ../../sdd1
    lrwxrwxrwx 1 root root 9 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fed7d78f94040c000080b7 -> ../../sdc
    lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fed7d78f94040c000080b7-part1 -> ../../sdc1
    lrwxrwxrwx 1 root root 9 Nov 17 2010 /dev/disk/by-id/scsi-36090a05830145e58e0b9c479000010a1 -> ../../sdg
    lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36090a05830145e58e0b9c479000010a1-part1 -> ../../sdg1
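
    One thing commonly tried when a newly created virtual drive is visible to MegaCli but not to the kernel is forcing a SCSI bus rescan so the new LUN gets probed. A sketch; which hostN corresponds to the controller can be checked under /sys/class/scsi_host/:

        # rescan every SCSI host so the kernel probes the new virtual drive
        for h in /sys/class/scsi_host/host*; do
            echo "- - -" > "$h/scan"
        done
        dmesg | tail    # the new disk should show up as the next free /dev/sdX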

    Read the article

  • Using Twitter to accept messages for a web app

    - by Jon
    I'd like my web app to be able to accept direct messages via Twitter. My app won't be sending out any spam; in fact it won't send out any messages at all. It will essentially be a bot-controlled account, as I would only access the account via an API to check for direct messages. Is this kind of usage permitted within Twitter's terms and conditions? Thanks.

    Read the article

  • Wifi channel interference

    - by artfulrobot
    In my neighbourhood there are:
    11 wifi signals on channel 1
    2 wifi signals on channel 4 (including mine at the moment)
    8 on channel 6
    6 on channel 11
    According to the diagram on Wikipedia, mine on channel 4 will suffer interference from channel 1 and channel 6, so a total of 20 other networks(!). So would I be better off joining channel 11, even though my network would then be in direct competition with the 6 others? I suppose the question is: what's worse, direct interference (meaning from networks on the same channel) from 6 networks, or fringe interference from many more networks?

    Read the article

  • How to configure Java Network Proxy Settings for domain computers

    - by adminParsed
    I need to set the Java network proxy settings to Direct Connection for computers on our domain. I have looked at the unattended setup configurations, as well as the deployment.properties file, and didn't see an option to set it to Direct Connection. Are there any alternate means to set this? E.g. a logon script, VBScript, PowerShell, or GPO (a GPO would be great, but I couldn't find anything on this). Thanks
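
    One avenue worth investigating, stated here as an assumption to verify against the Java deployment guide for the JRE version in use, is a system-level deployment.config that forces a shared deployment.properties in which the proxy type is set to direct; the pair of files could be pushed out by logon script or a GPO file copy. A sketch, with paths and the "0 = direct" value both needing confirmation:

        # C:\WINDOWS\Sun\Java\Deployment\deployment.config  (system-level config -- ASSUMED path)
        deployment.system.config=file\:C\:/WINDOWS/Sun/Java/Deployment/deployment.properties
        deployment.system.config.mandatory=true

        # C:\WINDOWS\Sun\Java\Deployment\deployment.properties
        # ASSUMPTION: proxy type 0 means direct connection -- confirm for your JRE release
        deployment.proxy.type=0
        deployment.proxy.type.locked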

    Read the article

  • Alternatives to connect to ORACLE database server without install the Oracle client.

    - by Salvador
    I am looking for a Delphi component to connect to an Oracle database server directly, without installing the Oracle client. I know of Oracle Data Access (ODAC) from Devart. Are there any other components with this capability? ODAC offers two connection modes to the Oracle server: connection through the Oracle Call Interface in Client mode, and direct connection over TCP/IP in Direct mode. ODAC-based database applications are easy to deploy and do not require installation of other data provider layers. Thanks in advance.

    Read the article

  • Reset after using a link with parameters

    - by CarolinaJay65
    I am using the window.location.search parameters (www.mysite.com?page=1) to direct the user to a specific page within the site. However, since those parameters are still in window.location, the browser's reset button continues to redirect to the same page. I would like the reset button to redirect to www.mysite.com. How do I clear the .search parameters so the browser's reset button redirects where I want? Is it done after the page has loaded, or after the reset button has been clicked?
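
    If the goal is simply to get back to the bare URL, one way (a sketch, not tied to any particular framework) is to navigate to location.pathname, which drops the ?page=1 query string:

        // send the user back to the URL without its query string
        window.location.href = window.location.pathname;      // e.g. "/" for www.mysite.com?page=1

        // or, without adding a history entry:
        window.location.replace(window.location.pathname);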

    Read the article

  • Using Subversion with SQL Server Management Studio

    - by Mike
    I am a member of a team with 3 developers. We have started using Redmine here for project management and issue tracking and LOVE it. I have seen elsewhere how nicely Redmine can work when a back-end repository is set up for a project. There is nice integration all around. This shop is currently .Net and SQL Server 2005. I am thinking about recommending a move to Subversion for our VCS (so that we can integrate with Redmine). I have seen a product called VisualSVN which will make it possible to use Visual Studio with Subversion, so that covers .Net. But the other big question is if it is possible to configure SQL Server Management Studio to somehow use Subversion for its VCS. Has anyone done this? This shop is currently using Sourcegear Fortress.

    Read the article

  • Where can I compare monitors with a given VESA mount?

    - by Dan Rasmussen
    I am looking into purchasing a dual-monitor setup, and need to purchase two monitors with VESA MIS-D mounts. My only problem is that that information doesn't seem to be readily available on most shopping websites. Neither Amazon nor Newegg seem to have the information searchable or filterable. I could shop for monitors, then Google around to see if they support VESA MIS-D, but is there a better way? Is there a resource (not necessarily a store - once I find a monitor I can shop elsewhere) where I can browse a variety of monitor specs and reviews while only looking at monitors with a certain VESA mount?

    Read the article
