Search Results

Search found 27011 results on 1081 pages for 'buy vs build'.

  • Drupal administration theme doesn't apply to Blocks pages (admin/build/block)

    - by hfidgen
    A site I'm creating for a customer in D6 has various images overlaying parts of the main content area. It looks very pretty and they have to be there for the general effect. The problem is that if you use this theme on the administration pages, the images get in the way of everything. My solution was to create a custom admin theme, based on the default one, with these image areas disabled in the output template files (page.tpl.php). The problem is that when you try to edit the blocks page, it uses the default theme, and half the block admin settings are unclickable behind the images. I KNOW this is by design in Drupal, but it's annoying the hell out of me and is edging towards "bug" rather than "feature" in my mind. It also appears that there is no way of getting around it. You can edit modules/block/block.admin.inc to force Drupal to show the blocks page in the chosen admin theme, BUT whichever changes you then make will not be transferred to the default theme, as Drupal treats each theme separately and each theme can have different block layouts:

        function block_admin_display($theme = NULL) {
          global $custom_theme;
          // Default behaviour: use the selected theme, or fall back to the site default.
          // $custom_theme = isset($theme) ? $theme : variable_get('theme_default', 'garland');
          // Forced: always display the admin theme instead.
          $custom_theme = variable_get('admin_theme', '0');
          // Fetch and sort blocks.
          $blocks = _block_rehash();
          usort($blocks, '_block_compare');
          return drupal_get_form('block_admin_display_form', $blocks, $theme);
        }

    Can anyone help? The only thing I can think of is to push the $content area well below the areas where the images appear and use blocks only for content display. Thanks!

    Read the article

  • scp vs netatalk, samba, and/or vsftpd with External USB drive

    - by KitsuneYMG
    I set up an Ubuntu server machine to share an ext2-formatted external USB drive. When attempting to copy a single 275 MB file from said device through netatalk, I get estimated transfer times of around 45 minutes. With Samba and FTP (using vsftpd) I get 1+ hours! Using scp to copy the file completes the download within 5 minutes. Another option, ssh+cp from the external device to ~ and then using netatalk to grab it from there, results in a total time of around 7 minutes. Does anyone have a clue what is misconfigured? Assuming that nothing is, is there any fs/pseudo-fs that would use the internal HDD as an intermediate location/onion-layer for the external HDD (for reads only)? Details, from AppleVolumes.default:

        /mnt/ext USB allow:username cnidscheme:cdb options:usedots,upriv

    Read the article

  • ASP.NET: Build Image Links Dynamically

    - by coson
    Good day, I am developing a website that has product images on an external server. I have code that tests whether an image exists, like this (pseudocode):

        DynamicString = FunctionThatCreatesDynamicString()
        ' DynamicString = "http://external_server/path/to/file1.jpg"
        If ImageExists(DynamicString) = StatusCode.200 Then
            ' Embed link in ASP.NET page
        Else
            ' Embed not-found image in ASP.NET page
        End If

    My code builds fine and appears to execute. The image appears properly when I view the external link in a browser (I have to authenticate first, but that's OK considering I'm on an internal network and this app will be used internally). However, when I view the source of my generated HTML page, I see the link to the "Not Found" image when I know the image is there. I compared every character of my dynamically assembled link against the external link and they all match. I'm wondering if the authentication has anything to do with why the image is not rendering properly in my rendered HTML. Any thoughts? TIA, coson
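
    That authentication hunch is worth testing directly: the server-side check runs under the application's identity, with no browser session to piggyback on, so an unauthenticated request could plausibly get the not-found response even though the file exists. A hedged Python sketch of such a check (the credentials are placeholders, and HTTP Basic auth is an assumption about the external server):

        # Check whether an image URL exists, authenticating explicitly.
        import requests

        def image_exists(url, user, password):
            # HEAD avoids downloading the image body; only the status code matters.
            resp = requests.head(url, auth=(user, password), timeout=5)
            return resp.status_code == 200

        if image_exists("http://external_server/path/to/file1.jpg", "svc_user", "secret"):
            print("embed the real image link")
        else:
            print("embed the not-found image")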

    Read the article

  • Multiple VMs for Tomcat cluster vs Multiple Tomcat instances on one physical box

    - by Greymeister
    I'm working on a project that will go into production on a cluster of Apache Tomcat instances, and I'm looking for the best hardware/OS solution; VMs have come up as one option. I have run ESXi/ESX instances before for development and testing, but for a hosting environment I'm curious whether having multiple VMs is actually worse than configuring one server to host multiple instances of Tomcat. These are my guesses:

    Pros for VMware:
      • Easier maintenance/backup of individual VMs (VMware makes this easy)
      • Can log in remotely to individual VMs without having to give host access (security?)
      • Easier to re-purpose the machine for OS/hardware changes

    Pros for running on one physical machine:
      • Overhead of only one OS (and no VMware footprint)
      • Apply OS/security updates once
      • One less administrative layer (no VM expertise required)

    I'm curious if anyone has other ideas about the benefits of either option.

    Read the article

  • 32 vs 64-bit software for the same machine?

    - by GorillaSandwich
    What is the difference between 32- and 64-bit software? My understanding is that 64-bit software can use more RAM, if it's available, because it has a larger address space. Is this correct? And, specifically: If I have a 64-bit operating system with lots of RAM and I install, say, the 32-bit version of MySQL instead of the 64-bit version, will it be unable to use all the available RAM, and therefore run slower than the 64-bit version might on the same machine (assuming RAM becomes the bottleneck before processing speed or disk speed)? If I have a 32-bit operating system and I install a 64-bit piece of software on it, will it (probably) fail to run?
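
    For context on the first point: a 32-bit process can address at most 2^32 bytes = 4 GiB of virtual memory (and in practice usually less per process), which is where the RAM ceiling comes from. A small Python sketch for checking the bitness of the process you are actually running:

        # Report whether the running interpreter is a 32- or 64-bit process.
        import struct
        import sys

        bits = struct.calcsize("P") * 8  # size of a C pointer, in bits
        print("%d-bit process" % bits)
        print("sys.maxsize = %d" % sys.maxsize)  # 2**31 - 1 on 32-bit, 2**63 - 1 on 64-bit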

    Read the article

  • File store: CouchDB vs SQL Server + file system

    - by Andrey
    I'm exploring different ways of storing user-uploaded files (all MS Office documents or similar) on our high-load web site. It's currently designed to store documents as files and have a SQL database store all the metadata for those files. I'm concerned about outgrowing the storage server, and about SQL Server performance, once the number of documents reaches hundreds of millions. I've been reading a lot of good information about CouchDB, including its built-in scalability and performance, but I'm not sure how storing files as attachments in CouchDB would compare to storing files on a file system in terms of performance. Has anybody used CouchDB clusters for storing LARGE numbers of documents in a high-load environment?
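
    For anyone who wants to benchmark the attachment approach before committing, here is a minimal sketch using the python-couchdb client, assuming a local CouchDB on the default port (the database name, document ID, and file are placeholders):

        import couchdb

        # Connect to a local CouchDB instance (default port 5984).
        server = couchdb.Server("http://localhost:5984/")
        db = server.create("documents")  # use server["documents"] if it already exists

        # Store the metadata as a regular document...
        doc = {"_id": "report-001", "uploader": "andrey", "filename": "report.doc"}
        db.save(doc)

        # ...and the file itself as an attachment on that document.
        with open("report.doc", "rb") as f:
            db.put_attachment(doc, f, filename="report.doc",
                              content_type="application/msword")

        # Reading it back streams the attachment out of CouchDB.
        data = db.get_attachment("report-001", "report.doc").read()
        print(len(data), "bytes")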

    Read the article

  • How do I tell NAnt to only call csc when there are .cs files to compile?

    - by rob_g
    In my NAnt script I have a compile target that calls csc. Currently it fails because no inputs are specified:

        <target name="compile">
            <csc target="library" output="${umbraco.bin.dir}\Mammoth.${project::get-name()}.dll">
                <sources>
                    <include name="project/*.cs" />
                </sources>
                <references>
                </references>
            </csc>
        </target>

    How do I tell NAnt not to execute the csc task if there are no .cs files? I read about the 'if' attribute but am unsure what expression to use with it, as ${file::exists('*.cs')} does not work. The build script is a template for Umbraco (a CMS) projects, which may or may not ever have .cs source files. Ideally, developers shouldn't have to remember to modify the NAnt script to include the compile task when .cs files are added to the project (or exclude it when all .cs files are removed).

    Read the article

  • Internet Troubles - PPPoE vs PPPoA?

    - by AkkA
    I have been having some internet troubles at home (ADSL2+ connection in Australia). We get random drop-outs on the authentication connection: the modem keeps its sync with the DSL service, but we lose authentication and either have to restart the router/modem (a combined Belkin unit, not sure of the model number) or unplug the phone cable, wait about 30 seconds, and plug it in again. I've called the ISP (Telstra) a few times, but they only offer limited support when we don't use their supported hardware. Apparently something had happened on their side; they checked the box again (at least it sounded that simple) and told me it would be fine. It wasn't. I've replaced all the filters around the house, but that didn't help either. We do live a fair way from the exchange (we get a sync speed of about 3000/900), so I thought it could be line noise, but that theory hasn't led anywhere. Telstra allows both PPPoE and PPPoA connections (which I'm configuring through my router; I don't have software on the PC side). I've been running PPPoA the whole time; would it make any difference to change to PPPoE? If not, are there any other theories as to why we would be experiencing these drop-outs? It had been fine for at least 12 months, then this suddenly started about 2 months ago.

    Read the article

  • JAVA: Build XML document using XPath expressions

    - by snoe
    I know this isn't really what XPath is for, but if I have a HashMap of XPath expressions to values, how would I go about building an XML document? I've found dom4j's DocumentHelper.makeElement(branch, xpath), except that it is incapable of creating attributes or indexing. Surely a library exists that can do this?

        Map xMap = new HashMap();
        xMap.put("root/entity/@att", "fooattrib");
        xMap.put("root/array[0]/ele/@att", "barattrib");
        xMap.put("root/array[0]/ele", "barelement");
        xMap.put("root/array[1]/ele", "zoobelement");

    would result in:

        <root>
          <entity att="fooattrib"/>
          <array><ele att="barattrib">barelement</ele></array>
          <array><ele>zoobelement</ele></array>
        </root>
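
    The question asks for a Java library, but the algorithm itself is small. As an illustration of the technique (not a dom4j substitute), here is a hedged Python sketch, standard library only, that handles child steps, [n] indexing, and a trailing /@attr step:

        # Build an XML tree from a map of simplified XPath-like paths to values.
        import re
        import xml.etree.ElementTree as ET

        STEP = re.compile(r"([^/\[\]]+)(?:\[(\d+)\])?")  # element name, optional [index]

        def put(root, path, value):
            # Split off a trailing attribute step such as "root/entity/@att".
            attr = None
            if "/@" in path:
                path, attr = path.rsplit("/@", 1)
            steps = path.split("/")
            if steps[0] != root.tag:
                raise ValueError("path must start at the root element")
            node = root
            for step in steps[1:]:
                name, idx = STEP.fullmatch(step).groups()
                idx = int(idx or 0)
                children = [c for c in node if c.tag == name]
                while len(children) <= idx:  # create missing siblings up to [idx]
                    children.append(ET.SubElement(node, name))
                node = children[idx]
            if attr is not None:
                node.set(attr, value)
            else:
                node.text = value

        xmap = {
            "root/entity/@att": "fooattrib",
            "root/array[0]/ele/@att": "barattrib",
            "root/array[0]/ele": "barelement",
            "root/array[1]/ele": "zoobelement",
        }
        root = ET.Element("root")
        for xpath, value in xmap.items():
            put(root, xpath, value)
        print(ET.tostring(root, encoding="unicode"))
        # <root><entity att="fooattrib" /><array><ele att="barattrib">barelement
        # </ele></array><array><ele>zoobelement</ele></array></root>

    The same index-or-create walk translates directly to dom4j or the JDK's DOM API.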

    Read the article

  • Router vs switch in a LAN [closed]

    - by servernewbie
    If I have a LAN connected with a switch, I understand the switch uses a CAM table to forward frames at layer 2 (by recording MAC-to-port relations). So far, all good. However, when using a router for a LAN (ONLY for the LAN, not to connect it to "the outside" WAN/internet/etc.), I get a bit confused about how it internally processes packets. I would split this into two router scenarios:

    Router with built-in switch: Here I would expect it to act exactly like a switch, with a CAM table internally. This would probably be a bit faster (guessing here?) than the next option.

    Router without built-in switch: This is where I get confused. If hostA wants to send a packet to hostB, it will ARP to find hostB's MAC address and send it there. With a switch (the scenario above) this is easy. But how does it work in a router WITHOUT a switch? My guess: hostA would put an Ethernet frame with hostB's MAC address on the wire. The router would pick up the frame (even though the router has a different MAC address, it would still fetch this frame that carries only hostB's MAC address). It would strip the Ethernet header, check the IP, and then consult its own internal ARP table again for the MAC address. That seems like a waste of resources compared to a router with a built-in switch. But maybe it does not work like that at all. Does it also contain a CAM table? If so, what would the difference between these two routers really be?

    Read the article

  • Using the terminal vs KDE in Linux?

    - by Ke
    I'm used to using Nautilus within CentOS, but I have recently got a VPS and am quickly realising that running a desktop environment is not acceptable there. Still, I find certain things, like changing folder permissions, so much quicker in KDE than typing it all out in the terminal. Everyone I speak to says to use the terminal, and that I should learn that way as opposed to relying on KDE, but there are certain things I just don't get: How is it possible to make quick changes to scripts and view them in a browser with only a terminal, without a mouse or KDE? How do you develop websites using just the terminal? And how can typing out and viewing permissions in the terminal be quicker, when it's instant and just a few clicks in KDE? Any thoughts are much appreciated. I would love to understand the benefits but just can't seem to see them right now. Cheers, Ke.

    Read the article

  • Do I need to have a company to buy an SSL certificate that will display green in the address bar?

    - by André Pena
    I have a non-commercial website in which users store some sensitive information, so I feel the need for an SSL certificate, but it seems that if I don't have a registered company I can't buy a "green" (Extended Validation) certificate. I have some related questions: Is it true that if I don't have a company, I can't have a green certificate? If I use a standard (non-business) certificate that won't show green (from GoDaddy, for instance), will it show red instead? Or will it have a less ugly display, something more neutral that won't scare the user?

    Read the article

  • Amazon EC2- many micro-instances vs single small/medium instance

    - by shashankaholic
    I have a chat application using a stack of Openfire, Tomcat 6 and MySQL. Currently I have all of these servers installed on a single Linux micro-instance (613 MB of memory). Even with a low user base of 10-20 I am encountering CPU overload, which is quite obvious here. As I am new to Amazon EC2, can somebody suggest how to scale up my architecture according to traffic? Should I use separate micro-instances for each app server (Openfire, MySQL, Tomcat 6), or should I use a single small or medium instance for the whole server stack? Some factors in context: heavy reliance on MySQL; high memory usage due to file transfer; the web application interacts with other Amazon services like S3 and SES.

    Read the article

  • Outlook.com Sweep rule vs Filter rule

    - by Mahesha999
    The new Outlook.com introduced a Sweep option. With it, you can create a rule that moves messages from a particular sender to a particular folder a certain number of days after the mail arrives. This rule gets added to the filter list, yet you cannot edit it by clicking Edit next to it in that list; a dialog appears saying you should delete the rule and create a new one. I don't understand why Sweep rules are treated differently from filter rules. I have 68 filter rules. What if I want to add such a time clause to those filter rules, so that instead of being moved to a certain folder immediately, incoming mails are kept in the Inbox for, say, 10 days and then moved? Such a time clause cannot be added to filter rules. So should I delete all 68 filters and create new Sweep rules?

    Read the article

  • Nginx vs Apache as reverse proxy, which one to choose

    - by mhd
    This kind of question has probably been asked here before, but I couldn't find one that really matches mine. I've heard that nginx's performance is quite impressive, while Apache has more docs and a larger community (read: experts) to get help from. What I want to know is how the two web servers compare, in terms of performance, ease of configuration, level of customization, etc., AS A REVERSE PROXY in a VPS environment. I'm still weighing the two for a Ruby web app (not RoR) served with the Thin server. A specific answer will be much appreciated; a general answer not touching the Ruby part is okay. I'm still a noob at web server administration.

    Read the article

  • Tape vs SSD backups regarding long-term storage reliability

    - by user66131
    My question is specifically about solid-state drives, not regular hard drives. I would like to put in place a grandfather-father-son backup scheme, with SSDs used for the grandfather and father portions; the yearly grandfather would be locked in an offsite safe for maybe 5-10 years. Can I expect that after this period of time the data would be preserved as well as it would be on tape?

    Read the article

  • Windows Server vs Sharing

    - by Mark Lawrence
    Not sure if this is the right place for this question. I have a friend who owns a small business running three local machines that are connected to the internet, plus a server that each local machine connects to. He has recently bought a newer "server", and by server I mean a Windows Vista box. He wants to use it purely to store data that the three local machines can access, kind of like a glorified external hard drive. I'm suggesting he forgo the server option and simply set up file sharing on the 'server' box. I'm interested in hearing any suggestions as to whether the server idea is preferable.

    Read the article

  • Virtualize SBS 2003 - P2V vs migrating to new VM

    - by jlehtinen
    I need to virtualize a SBS 2003 server in my work environment. I need some tips on what people think is the best way to proceed.

    Background: The SBS 2003 server is the primary DC for the domain and also hosts FTP, RRAS (VPN), DNS, and file shares. Exchange is NOT used; neither is SQL Server. DHCP is done via a firewall appliance. I have added a Server 2003 VM to the domain and promoted it to the DC role. AD/DNS is replicating here correctly. This was mainly done to provide fault tolerance to the domain; I was not intending to make this VM the primary DC. I've already asked about buying upgraded licensing for Server 2008/2012 but was refused due to cost.

    Options: I see (at least) two routes I could take to complete this. From what I've read, option 2 is the "preferred" method, but there are a few steps where I'm not clear on what to expect.

    Option 1.) P2V the primary DC
      • Power off the primary DC
      • Power off the secondary DC (to prevent USN rollback in case the P2V has issues)
      • P2V (cold clone) the primary DC
      • Boot the new PDC VM
      • Allow the new hardware to be detected
      • Remove the old NIC hardware from Device Manager
      • Assign the old IPs to the new virtual NICs
      • Reboot the PDC VM, confirm connectivity and no major issues
      • Power on the secondary DC, confirm replication

    Option 2.) Create a new VM, transfer roles, remove the original DC from the domain
      • Create a new VM, install SBS 2003 (Do I need the original SBS install discs for this? The MS migration doc mentions this.)
      • Add the VM to the domain, promote it to the DC role (Does this start the 7-day timer during which two SBS servers can be in the same domain?)
      • Set up RRAS on the new VM
      • Set up IIS/FTP on the new VM
      • Move file shares to the new VM
      • Transfer FSMO roles to the new VM DC
      • dcpromo the original primary DC out of the domain

    Read the article

  • How do I implement/build/create an 'in-memory database' for my unit tests?

    - by Michel
    Hi all, I started unit testing a while ago, and as it turned out I was doing more regression testing than unit testing, because I also included my database layer and so went to the database every time. So I implemented Unity to inject a fake database layer, but of course I still want to store some data, and the main opinion was: "create an in-memory database". But what is that, and how do I implement it? The main question is: I think I have to fake the database layer, but doesn't that force me to create a 'simple database' myself? Or: how can I keep it simple and not rebuild SQL Server just for my unit tests :) At the end of this question I'll give an explanation of the situation on the project I just started on, and I was wondering if this is the way to go. Michel

    The current situation I've seen at this client is that test data is contained in XML files, and there is a 'fake' database layer that connects all the XML files together. For the real database we're using the Entity Framework, and that works very simply. But in the 'fake' layer I have to create all kinds of classes to load, save, persist, etc. the data. It seems weird that there is so much work in the fake layer and so little in the real layer. I hope this all makes sense :)
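
    The question is about .NET (Unity, Entity Framework), but the usual shape of an "in-memory database" is language-agnostic: point the data layer at a database engine that lives only in RAM for the lifetime of the test. A minimal sketch of the idea in Python, using SQLite's special ":memory:" database (the table and test names are made up for illustration):

        # Each test gets a fresh, empty database -- no shared state, no cleanup files.
        import sqlite3
        import unittest

        class CustomerRepositoryTest(unittest.TestCase):
            def setUp(self):
                self.conn = sqlite3.connect(":memory:")  # vanishes when closed
                self.conn.execute(
                    "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")

            def tearDown(self):
                self.conn.close()

            def test_insert_and_fetch(self):
                self.conn.execute(
                    "INSERT INTO customers (name) VALUES (?)", ("Michel",))
                row = self.conn.execute("SELECT name FROM customers").fetchone()
                self.assertEqual(row[0], "Michel")

        if __name__ == "__main__":
            unittest.main()

    In the .NET world the analogous trick is usually an embedded engine (SQLite, for example) behind the same data-access interface the real SQL Server sits behind, so the fake layer stays thin instead of re-implementing persistence by hand.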

    Read the article

  • Big problems with iPhone ad hoc build

    - by phil swenson
    No matter what I do, I can't get my ad hoc provisioning profile to work. In Organizer, I always get "A valid signing identity matching this profile cannot be found in your keychain" for my ad hoc profile. I have my distribution cert installed in my login keychain. I dragged the ad hoc .mobileprovision file into Xcode; that's pretty much all there is to it, right? I searched around and found suggestions like re-creating the cert/profile. Did that, same thing. Also: make sure your login keychain is the default. It is. I even tried a different computer. Same result. In the Xcode Ad Hoc target I don't have a distribution target to pick. This all used to work, but obviously I messed something up. Perhaps my process is just wrong (it's been months since I did this). Does someone have a step-by-step list of how to set up for ad hoc distribution?

    Read the article

  • Multiple servers vs 1 big server performance

    - by pistacchio
    Hi to all! My team of developers has suggested a server structure for an upcoming project we are developing. Our structure is "logical", meaning that the various logical components of the application (it is a distributed one) rely on different servers. Some components are more critical than others and will be subjected to more load. Our proposal was to have one server per component, but the hardware guys suggested replacing the various machines with a single, bigger one using virtual servers. They're going to use blade servers. Now, I'm not an expert at all, but my question to them was: if we need, for example, three 2 GHz CPU / 2 GB RAM machines and you give me one machine with three 2 GHz CPUs and 6 GB of RAM, is that the same thing? They told me it is. Is this accurate? What are the advantages and disadvantages of the two solutions? What are the generally accepted best practices? Could you point me to some URL reference dealing with the problem? Thank you in advance!

    EDIT: Some more info. The (internet/intranet) application is already layered. We have some servers in the DMZ that expose pages to the internet, and the databases are on their own machines. What we want to split (and they want to join) are some web servers that mainly expose web services. One is a DAL that communicates with the database layer, one is our Single Sign-On / User Profile application that gets called once per page, and one is a clone of what is seen on the internet, to be used on our LAN.

    Read the article

  • Problem with doctrine:build-schema

    - by Ying
    Hi, I'm using symfony with Doctrine, and I'm trying to generate the schema.yml file from a database. I get this error: SQLSTATE[42000]: Syntax error or access violation: 1064 You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'group' at line 1. Failing Query: "DESCRIBE group". I have a table named group in my database, and I suspect that this is what's tripping it up. It's not an access problem, as I've checked my DB privileges. Any suggestions on what I can do? I have tried the quote-identifier attribute, but with no luck :( I'm also struggling to find good Doctrine documentation; I can't seem to find a list of attributes, say, for creating a column in the schema.yml file. I tried reaching out to the symfony forums but they are not responsive! Any help would be greatly appreciated! Thanks.
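
    The suspicion is well-founded: GROUP is a reserved word in MySQL, so the introspection query fails unless the table name is backtick-quoted. A quick way to confirm that from Python (hypothetical connection details; assumes the MySQLdb package is installed):

        import MySQLdb

        # Hypothetical credentials -- substitute your own.
        conn = MySQLdb.connect(host="localhost", user="root",
                               passwd="secret", db="mydb")
        cur = conn.cursor()

        # cur.execute("DESCRIBE group")    # fails: GROUP is a reserved word
        cur.execute("DESCRIBE `group`")    # backticks make the reserved name legal
        for column in cur.fetchall():
            print(column)

    If the quoting works at the SQL level, the remaining question is getting Doctrine's schema introspection to apply it, which is what the quote-identifier attribute is supposed to do.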

    Read the article
