Search Results

Search found 13645 results on 546 pages for 'bugz r us'.

  • How do I rewrite *.example.com to www.example.com?

    - by Lekensteyn
    In my network, I have some Ubuntu machines which need to download files from nl.archive.ubuntu.com. Since it's quite a waste of time to download everything multiple times, I've set up a squid proxy for caching the data. Another use for this proxy was rewriting requests for archive.ubuntu.com or *.archive.ubuntu.com to nl.archive.ubuntu.com, because this mirror is faster than the US mirrors. This has worked quite well, but after a recent install of my caching machine, the configuration was lost. I remember having a separate Perl program for handling this rewrite. How do I set up such a squid proxy, which rewrites the host *.example.com to www.example.com and caches the result of the latter?
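
    One way to implement the rewrite the poster remembers is a small url_rewrite_program helper. Below is a minimal sketch in Python, assuming Squid's classic line-based redirector protocol (the URL is the first whitespace-separated field on each stdin line, and the reply is the rewritten URL); the helper path and hostnames are illustrative:

      #!/usr/bin/env python
      # Hedged sketch of a Squid URL-rewrite helper. squid.conf would point
      # at it with something along the lines of:
      #   url_rewrite_program /usr/local/bin/rewrite_host.py
      import re
      import sys

      # Rewrite example.com and any *.example.com host to www.example.com.
      PATTERN = re.compile(r'^(http://)([a-z0-9.-]+\.)?example\.com(/.*)?$')

      def rewrite(url):
          m = PATTERN.match(url)
          if not m:
              return url                      # leave all other URLs untouched
          return m.group(1) + 'www.example.com' + (m.group(3) or '/')

      def main():
          for line in sys.stdin:
              parts = line.split()
              new_url = rewrite(parts[0]) if parts else ''
              sys.stdout.write(new_url + '\n')
              sys.stdout.flush()              # Squid expects one unbuffered reply per request

      if __name__ == '__main__':
          main()

    The same pattern, pointed at *.archive.ubuntu.com and nl.archive.ubuntu.com, covers the mirror case described above; Squid then caches the responses fetched from the rewritten host.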

    Read the article

  • Sharding / indexing strategy for multi-faceted search

    - by Graham
    I'm currently thinking about our database structure and how we modify it for scale. Specifically, we're thinking about using ElasticSearch to provide our search functionality. One common pattern with ElasticSearch seems to be the 'user-routing' pattern; that is, using routing to ensure that any one user's data resides on the same shard. This is great for client-specific search, e.g. Gmail. Our application has a constraint such that any user will have a maximum of a few thousand documents, so this pattern seems like a good candidate. However, our search needs to work across all users, as well as targeting a specific user (so I might search my content, Alice's content, or all content). Similarly, we need to provide full-text search across any timeframe; recent months to several years ago. I'm thinking of combining the 'user-routing' and 'index-per-time-interval' patterns:
    - I create an index for each month
    - By default, searches are aliased against the most recent X months
    - If no results are found, we can search against the previous X months
    - As we grow, we can reduce the interval X
    - Each document is routed by the user ID
    So, this should let us do the following:
    - search by user: this will search all indices across 1 shard
    - search by time: this will search ~2 indices (by default) across all shards
    Is this a reasonable approach, considering we may scale to multi-million+ documents? Or should I be denormalizing the data somehow, so that user searches are performed on a totally separate index from date searches? Thanks for any pros and cons of the above scenario.
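
    A rough sketch of the index-selection side of this plan, in plain Python (the docs-YYYY-MM naming scheme and the widening fallback are assumptions for illustration, not ElasticSearch API calls):

      from datetime import date

      def monthly_indices(end, months_back):
          """Index names for the `months_back` months ending at `end`,
          assuming one index per month named like docs-2013-06."""
          names = []
          year, month = end.year, end.month
          for _ in range(months_back):
              names.append("docs-%04d-%02d" % (year, month))
              month -= 1
              if month == 0:
                  year, month = year - 1, 12
          return names

      def search_windows(today=None, default_window=2, max_window=24):
          """Yield progressively wider lists of indices to query: start with
          the most recent couple of months, widen only if nothing was found."""
          today = today or date.today()
          window = default_window
          while window <= max_window:
              yield monthly_indices(today, window)
              window *= 2

      # A user-scoped search would additionally pass the user ID as the routing
      # value, hitting one shard per index; an all-users search omits routing.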

    Read the article

  • backface culling error

    - by acrilige
    I'm writing a simple software renderer. In my pipeline I have a backface culling stage, but it looks like it has some error (see picture). I perform culling right after the world transformation. (I can't insert the picture in the post because I don't have enough points, so I just uploaded it (cube model): http://imageshack.us/photo/my-images/705/bcerror.png/)
      Vector3F view_dir(0.0f, 0.0f, 1.0f);
      std::vector<Triangle> to_remove;
      for (Triangle &t : m_triangles) {
          Vector4F e1 = t.v2 - t.v1;
          Vector4F e2 = t.v3 - t.v1;
          Vector3F normal(
              e1.y * e2.z - e1.z * e2.y,
              e1.z * e2.x - e1.x * e2.z,
              e1.x * e2.y - e1.y * e2.x
          );
          normal.Normalize();
          float dot = Dot(view_dir, normal);
          if (dot <= 0)
              to_remove.push_back(t);
      }
      for (Triangle& t : to_remove)
          m_triangles.erase(std::remove(m_triangles.begin(), m_triangles.end(), t), m_triangles.end());
    The camera sits at the origin and points into the screen (RH). What is the reason?
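
    For comparison, the usual back-face test under perspective uses the vector from the camera to the triangle rather than a fixed view direction, so triangles away from the screen centre are still classified correctly. A rough sketch of that test in plain Python (the cull sign depends on the winding convention, so treat it as an illustration rather than a drop-in fix):

      def should_cull(v1, v2, v3, eye=(0.0, 0.0, 0.0)):
          """Back-face test: compare the face normal against the direction
          from the eye to the triangle, not against a constant view_dir."""
          e1 = tuple(b - a for a, b in zip(v1, v2))   # v2 - v1
          e2 = tuple(b - a for a, b in zip(v1, v3))   # v3 - v1
          normal = (e1[1] * e2[2] - e1[2] * e2[1],
                    e1[2] * e2[0] - e1[0] * e2[2],
                    e1[0] * e2[1] - e1[1] * e2[0])
          to_triangle = tuple(v - e for v, e in zip(v1, eye))
          dot = sum(n * d for n, d in zip(normal, to_triangle))
          return dot >= 0   # assumes CCW front faces with outward normals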

    Read the article

  • How to store prices that have effective dates?

    - by lal00
    I have a list of products. Each of them is offered by N providers. Each provider quotes us a price for a specific date. That price is effective until that provider decides to set a new price. In that case, the provider will give the new price with a new date. The MySQL table header currently looks like: provider_id, product_id, price, date_price_effective. Every other day, we compile a list of products/prices that are effective for the current day. For each product, the list contains a sorted list of the providers that have that particular product. In that way, we can order certain products from whoever happens to offer the best price. To get the effective prices, I have a SQL statement that returns all rows that have date_price_effective >= NOW(). That result set is processed with a Ruby script that does the sorting and filtering necessary to obtain a file that looks like this: product_id_1,provider_1,provider_3,provider_8,provider_10... product_id_2,provider_3,provider_2,provider_1,provider_10... This works fine for our purposes, but I still have an itch that a SQL table is probably not the best way to store this kind of information. I have a feeling that this kind of problem has been solved before in other, more creative ways. Is there a better way to store this information other than in SQL? Or, if using SQL, is there a better approach than the one I'm using?
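
    For reference, the "effective price as of a date" can be read straight off such a table with a greatest-per-group query. A sketch, assuming the table is named prices with the columns listed above; sqlite3 is used here as a stand-in driver (MySQL would take %s placeholders and a different connector):

      import sqlite3

      EFFECTIVE_PRICES = """
      SELECT p.provider_id, p.product_id, p.price, p.date_price_effective
      FROM prices p
      JOIN (SELECT provider_id, product_id,
                   MAX(date_price_effective) AS latest
            FROM prices
            WHERE date_price_effective <= ?
            GROUP BY provider_id, product_id) m
        ON p.provider_id = m.provider_id
       AND p.product_id = m.product_id
       AND p.date_price_effective = m.latest
      ORDER BY p.product_id, p.price
      """

      def effective_prices(conn, as_of):
          """Latest quote per (provider, product) effective on `as_of`,
          cheapest provider first within each product."""
          return conn.execute(EFFECTIVE_PRICES, (as_of,)).fetchall()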

    Read the article

  • Can qmail-ldap replace the validrcptto file?

    - by T. Fabre
    We are using qmail to route incoming mail to our Domino server. However, that requires us to maintain the validrcptto file with the list of all allowed email addresses. Since Domino provides an LDAP directory, does qmail-ldap provide functionality to look up valid RCPT TO addresses in the Domino directory instead of the validrcptto file, so that we wouldn't have to maintain that extra list? We have about 150~200 users, so is setting up qmail-ldap worth the extra mile if it can verify addresses in the LDAP directory? If anyone has experience with qmail-ldap and its setup, I'd be glad to hear from you.

    Read the article

  • Converting a Mercurial Repository to Subversion (SVN)

    - by alexganose
    This may seem like a very strange situation. Initially we were using Subversion (SVN) for version control... then we moved to Mercurial and used a tool to convert our previous commits to a Mercurial form. And now we want to move back to Subversion, however we can't seem to find anything that will allow us to keep our history from the Mercurial commits and keep it in Subversion form. Does anyone know if this is possible / how to go about doing it? Thanks!!

    Read the article

  • How to bulk mail-enable contacts from AD in Exchange 2007?

    - by George Hewitt
    Hello, We have several thousand 'contacts' set up in AD already for a faxing system. We're migrating to an online fax provider that uses e-mail rather than plain old telephone. So, we've bulk-edited all the AD records so that the 'mail' attribute is populated with the right e-mail address in the right format. Now, how do we enable these contacts within Exchange 2007? I've looked through http://technet.microsoft.com/en-us/library/bb684891.aspx but that only seems to talk about manually editing the CSV output to specify the external addresses. AD already knows the external e-mail addresses - I just need the info in Exchange! Any thoughts?

    Read the article

  • Leveraging NuGet as a central repository for PowerShell modules

    - by cibrax
    We have been working a lot lately with PowerShell as part of our star product at Tellago Studios, “Moesion”. One of the main features we provide in Moesion is the ability to execute PowerShell commands remotely on a given server using a mobile web interface (you can read more in my previous post about Moesion). One of the things we realized in all this time is that PowerShell lacks a central repository where IT guys or we, the developers, can easily grab and reuse commands. All the commands or modules are basically spread across multiple places or websites, like personal blogs, TechNet or CodePlex projects to name a few, making the search for them very hard. You are usually limited to using your favorite search engine and copying what you find. In addition, there is not an easy way to reuse, extend or version these commands, which also limits any contribution that you could make to the community. My friend Jose wrote a great post the other day about the importance of reusing PowerShell modules, and what the mechanism to reuse them is. Jose, however, based his post on a custom implementation using a Git repository for storing the modules. We have NuGet on the .NET platform for sharing and reusing existing libraries or code, so why can't we just leverage it for reusing PowerShell modules as well? Some teams in Microsoft are using NuGet for distributing libraries and binaries, so it would be a great thing for all of us if they also distributed the scripting interfaces in PowerShell using NuGet. This applies to the .NET OSS community as well. In fact, it looks like Andrew Nurse had the same idea and implemented a project for this on BitBucket, PsGet.

    Read the article

  • Non-blocking ORM issues

    - by Nikolay Fominyh
    Once I had a question on SO, and found that there are no non-blocking ORMs for my favorite framework. I mean an ORM with callback support for asynchronous retrieval. The ORM would be supplied with a callback or some such to "activate" when data has been received. Otherwise the ORM needs to be split off into a separate thread to guarantee UI responsiveness. I want to create one, but I have some questions that are blocking me from starting development: What issues can we meet when developing an ORM? Does the word "non-blocking" before the word "ORM" dramatically increase the complexity of the ORM? Why are there not many non-blocking ORMs around? Update: It looks like I have to improve my question. We have solutions that already allow us to receive data in a non-blocking way. And I believe that not all companies that use such solutions are using raw SQL. We want to create a more generic solution that we can reuse in future projects. What difficulties can we meet?
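
    As a sketch of what "callback support for asynchronous retrieval" can look like with nothing but the standard library: the blocking query runs on a worker thread and the supplied callback fires once the rows arrive, so the event loop (or UI) stays responsive. The database path, query and callback below are made-up placeholders:

      import asyncio
      import sqlite3

      def _blocking_query(db_path, sql, params):
          # Ordinary synchronous data access, kept off the event loop.
          with sqlite3.connect(db_path) as conn:
              return conn.execute(sql, params).fetchall()

      async def fetch_async(db_path, sql, params=(), callback=None):
          loop = asyncio.get_running_loop()
          rows = await loop.run_in_executor(None, _blocking_query, db_path, sql, params)
          if callback is not None:
              callback(rows)   # "activate" the caller once data has been received
          return rows

      # Usage sketch:
      # asyncio.run(fetch_async("app.db", "SELECT * FROM users WHERE id = ?", (1,),
      #                         callback=lambda rows: print(rows)))

    A full non-blocking ORM would add mapping of rows to objects, identity maps and so on, which is where most of the extra complexity over this sketch lives.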

    Read the article

  • how to remotely open a URL in Firefox in a specific profile?

    - by miernik
    I have several instances of Firefox with several different profiles running, among them profiles with the names "software" and "test". I am trying to open a URL from a bash script and have it open in the "test" profile, like this: firefox -P "test" http://www.example.org/ However, it opens in the "software" profile anyway. Any ideas? Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.8) Gecko/20100308 Iceweasel/3.5.8 (like Firefox/3.5.8) No, it is not a permissions problem; all my profile directories are perfectly under my permissions:
      root@przehyba:~/.mozilla# ls -ld firefox/
      drwx------ 13 miernik miernik 4096 Mar 11 09:15 firefox/
      root@przehyba:~/.mozilla# ls -ld firefox/*
      drwxr-xr-x 9 miernik miernik 4096 Mar 12 11:29 firefox/info
      -rw-r--r-- 1 miernik miernik 560 Mar 11 09:15 firefox/profiles.ini
      drwxr-xr-x 10 miernik miernik 4096 Mar 16 11:51 firefox/software
      drwxr-xr-x 9 miernik miernik 4096 Mar 11 09:14 firefox/tech
      drwxr-xr-x 11 miernik miernik 4096 Mar 15 22:48 firefox/test
      root@przehyba:~/.mozilla#

    Read the article

  • IIS 7.5 website application pool with 'full control' permissions hackable?

    - by Caroline Beltran
    Although I would never set this permission, I would like to know how a static HTML website with the permission mentioned in the title could be compromised. In my humble opinion, I would guess that this would pose no threat, since a web visitor has no way to upload/edit/delete anything. What if the site was a simple PHP website that simply displayed 'hello world'? What if this PHP site had a contact-us form that was properly sanitized? Thank you EDIT: I should mention restricting IIS to GET and POST requests only; otherwise, anybody can delete and upload content.

    Read the article

  • Understanding Asynchronous Programming with .NET Reflector

    - by Nick Harrison
    When trying to understand and learn the .NET framework, there is no substitute for being able to see what is going on behind the scenes inside even the most confusing assemblies, and .NET Reflector makes this possible. Personally, I never fully understood connection pooling until I was able to poke around in key classes in the System.Data assembly. All of a sudden, integrating with third party components was much simpler, even without vendor documentation! With a team devoted to developing and extending Reflector, Red Gate have made it possible for us to step into and actually debug assemblies such as System.Data as though the source code was part of our solution. This maybe doesn't sound like much, but it dramatically improves the way you can relate to and understand code that isn't your own. Now that Microsoft has officially launched Visual Studio 2012, Reflector is also fully integrated with the new IDE, and supports the most complex language feature currently at our command: asynchronous processing. Without understanding what is going on behind the scenes in the .NET Framework, it is difficult to appreciate what asynchronicity actually brings to the table and, without Reflector, we would never know the Arthur C. Clarke magic that the compiler does on our behalf. Join me as we explore the new asynchronous processing model, as well as review the often misunderstood and underappreciated yield keyword (you'll see the connection when we dive into how the CLR handles async). Read more here

    Read the article

  • Cron Script to Delete Folder Contents Every 5 Minutes on Media Temple

    - by Brian Iannone
    I'm not familiar with server-side scripting, but I'm currently using a PHP application on Media Temple to cache JPEGs from four webcams hosted on a server located in the middle of the Indian Ocean. (Hence my reason for caching them in the US.) The webcams are updated every five minutes. The PHP application stores the cached images in http://static.rigic.co/cache/. I would like to create a cron script to automatically delete the contents of "cache" (not the folder itself; just the files inside) at a regular interval.
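
    A sketch of the kind of cleanup job this could be; the cache path and the crontab line are illustrative placeholders rather than Media Temple specifics:

      #!/usr/bin/env python
      # Delete the files inside the cache directory, leaving the directory itself.
      # A crontab entry roughly like the following would run it every 5 minutes:
      #   */5 * * * * /usr/bin/python /home/USER/bin/clear_cache.py
      import os

      CACHE_DIR = "/path/to/webroot/cache"   # directory served as static.rigic.co/cache/

      def clear_cache(directory):
          for name in os.listdir(directory):
              path = os.path.join(directory, name)
              if os.path.isfile(path):       # skip subdirectories, if any
                  os.remove(path)

      if __name__ == "__main__":
          clear_cache(CACHE_DIR)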

    Read the article

  • Radiation from a UPS

    - by Erel Segal
    In our office, there are frequent power outages that harm my desktop computer, so I wanted to install a UPS. However, my office-mates pointed me to papers talking about hazardous radiation from the UPS. The UPS manufacturers themselves recommend putting the UPS several meters away from humans, which is not possible because our office is small (the power is about 0.5 meters from us). As an alternative to a UPS, my office-mates recommended that I switch to a laptop, which has a battery, so it's immune to outages. I have several questions: Is it true that the radiation from a laptop battery is lower than the radiation from a UPS? They do just the same thing - supply power using a battery! If the answer to 1 is yes - is there an alternative way to attach a battery, similar to a laptop battery, to a desktop computer? If the answer to 1 is no - how can I prove this to my office-mates, so that they let me use a UPS?

    Read the article

  • REST client throws timeout exception

    - by shandu
    Hi, I have created a REST client in C# using the example on this page: http://msdn.microsoft.com/en-us/library/aa395208(v=vs.90).aspx. The server is built in PHP. When I send a request to some URLs I get this exception: The request channel timed out while waiting for a reply after 00:00:59.9531250. Increase the timeout value passed to the call to Request or increase the SendTimeout value on the Binding. The time allotted to this operation may have been a portion of a longer timeout. But sometimes, when I debug the code, I get a response. How to solve this?

    Read the article

  • Solving "XmlSchemaException: The global element '<elementName>' has already been declared"

    - by ChrisD
    I recently encountered this error when I attempted to consume a new hosted WCF service. The service used the Request/Response model and had been properly decorated. The response and request objects were marked as DataContracts and had a specified namespace. My WCF service interface was marked as a ServiceContract and shared the namespace attribute value. Everything should have been fine, right?
      [ServiceContract(Namespace = "http://schemas.myclient.com/09/12")]
      public interface IProductActivationService
      {
          [OperationContract]
          ActivateSoftwareResponse ActivateSoftware(ActivateSoftwareRequest request);
      }
    Well, not exactly. Apparently the WSDL generator was having an issue: System.Xml.Schema.XmlSchemaException: The global element 'http://schemas.myclient.com/09/12:ActivateSoftwareResponse' has already been declared. After digging I've found the problem; the WSDL generator has some reserved suffixes for its entities, including Response, Request and Solicit (see http://msdn.microsoft.com/en-us/library/ms731045.aspx). The error message is actually the result of a naming conflict. The WSDL generator uses the namespace of the service to build its reserved types. The service contract and data contract share a namespace, which, coupled with the Response/Request name suffixes I was using in my class names, resulted in the SchemaException. The Fix: two options: (1) rename my data contract entities to use a non-reserved suffix (i.e. change ActivateSoftwareResponse to ActivateSoftwareResp), or (2) change the namespace of the data contracts to differ from the service contract namespace. I chose option 2 and changed all my data contracts to use a "http://schemas.myclient.com/09/12/data" namespace value. This avoided a name collision and I was able to produce my WSDL and consume my service.

    Read the article

  • linux migration/N high cpu consumption

    - by Alexander
    On my Linux appliance based on the 3.0.0-14 kernel I got:
      RPN:/tmp# ps axuf | grep migration
      root 6 92.9 0.0 0 0 ? S Apr23 2788:33 \_ [migration/0]
      root 7 99.7 0.0 0 0 ? S Apr23 2993:20 \_ [migration/1]
    My top output is:
      RPN:/tmp# top -b -n1
      top - 12:03:41 up 2 days, 2:18, 5 users, load average: 25.76, 25.26, 24.73
      Tasks: 171 total, 1 running, 168 sleeping, 0 stopped, 2 zombie
      Cpu(s): 14.0%us, 12.6%sy, 0.8%ni, 72.0%id, 0.3%wa, 0.0%hi, 0.3%si, 0.0%st
      Mem: 1543032k total, 1264728k used, 278304k free, 25308k buffers
      Swap: 0k total, 0k used, 0k free, 183168k cached
    My question: why do the "migration/N" processes take so much CPU?

    Read the article

  • Group policy doesn't let me execute Chrome (Win 7)

    - by George Katsanos
    Where I work, the admins just migrated us to Windows 7. They gave me admin rights, but I still had to "run as administrator" my Google Chrome installation. After I managed to install it, I realized I even have to go through the 'run as administrator' shortcut every time I want to execute the application. I even edited the properties of the shortcut to check 'always run as administrator', but nothing changed. The message I get when I'm trying to launch Chrome is "This program is blocked by group policy. For more information contact your system administrator"... Is it something I can work out alone, or do I have to convince them to change the "policy"?

    Read the article

  • What would be a good topic for research on "edge of multiple processors / computers programming"?

    - by Kabumbus
    This is a subjective discussion, so we can express our dreams and hopes here. A "topic" must be like a task with the point of having a software product as the end result. A "topic" must be mainly about "Software engineering", "Algorithm and data structure concepts" and perhaps "Design patterns". I mean, let us try to look at what is not already there. What can be developed in a few months and give a breakthrough / start a new leap / show something not realized before in the science of multiple-computer programming? What I see is already there: LAN / wire and other infrastructural programs for connecting at the device level; MPI / BitTorrent / Jabber protocols / APIs / servers for messaging on top; Boost and analogs on every OS in most languages for multithreading; and lots of CUDA-like frameworks for fast calculation on computers' GPUs. What I personally do not see out there is a cross-platform framework for multiple-process interaction - meaning one that would allow easy creation of multiple processes running in parallel inside one host app on one machine, at a level no harder than what is needed for thread creation (so no separate server apps - just one lib doing it all). Is there any such lib, and what can you propose as a research topic?
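
    For context on what already exists close to that last idea, Python's standard multiprocessing module lets a single host app spawn and coordinate worker processes with roughly thread-level effort and no separate server processes; a minimal sketch:

      from multiprocessing import Pool, cpu_count

      def work(chunk):
          # Placeholder for whatever computation each worker process performs.
          return sum(x * x for x in chunk)

      if __name__ == "__main__":
          data = [range(i * 1000, (i + 1) * 1000) for i in range(8)]
          # One worker process per core, all living inside this one host app.
          with Pool(processes=cpu_count()) as pool:
              results = pool.map(work, data)
          print(sum(results))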

    Read the article

  • how to properly install Chromium from a zip and make it the default browser

    - by ClarifyLinux
    Since the Chromium PPA is no longer maintained, those of us preferring to use Chromium over Chrome have two options: build and install from source, or download either 'beta' or daily builds (in a zip file). Unfortunately for me, option 1 is overly complicated. I know how to compile most any other application in Ubuntu, but I've never been able to get Chromium to build correctly. I am currently using option 2. In Chromium I have the Chromium Updater installed (http://goo.gl/ffAMy). This gives me quick access to the most recent 64-bit versions. Once downloaded, I install to /home/myuser/opt/chrome-linux. From this directory I can run the chrome binary. It works perfectly except for the fact that I cannot get it to act as my default browser. I've tried, as root, installing the binary in /opt/chrome-linux/ with a symbolic link to the 'chrome' binary in /usr/bin. Unfortunately, this doesn't work as a non-root user. So my question is - how do I properly install a downloaded Chromium zip build so that it's listed as an option for the default browser?

    Read the article

  • How to tell your boss that he's a bad programmer? [closed]

    - by Doe
    Possible Duplicate: How to tell your boss that his programming style is really bad? There was a question about the boss having a bad programming style (weird booleans, empty loops, etc.). Having a bad/weird style does not imply being a bad programmer, but my situation is different. My boss outputs some really nasty code for the project on which we are working together (just the two of us). Examples:
    - functions that span several screens (big screens - 1900 x 1200)
    - deeply nested conditional and loop statements (up to 10 levels!!)
    - too many static variables, singletons, and both (a singleton class with all the methods and members also static)
    - sometimes the code committed to the version control system does not even compile!
    - copy-paste code instead of separating it into an independent function
    - failing all the deadlines
    - "This is [C#|Java|Python], it shouldn't be efficient; that's why we loop all over the haystack to find the needle."
    - "This is C/C++, it's fast enough to loop all over the haystack to find the needle."
    There is much more to mention... But the worst is that I have to redo much of the stuff he does; my code, which I try to keep clean, is often polluted with the above-mentioned atrocities. He's reaching 30 soon, so all his skills are established, and I don't even know if it's possible to change something. I like the project, but sometimes I just want to quit...

    Read the article

  • Live Webcast: Make Better, Faster Decisions using Visualization - December 18th

    - by Melissa Centurio Lopes
    Register today for Oracle’s Primavera upcoming Live Webcast: Make Better, Faster Decisions using Visualization, December 18th at 12pm ET. Join this webcast and discover how Oracle’s AutoVue enhances Primavera solutions with visualization of project documents, enabling users to view and digitally collaborate, improving decision making and project execution. Don’t miss this live webcast: register today and learn how you can increase visibility, improve productivity and leverage existing infrastructures with Primavera and AutoVue.

    Read the article

  • Friday tips #2

    - by Chris Kawalek
    Welcome to our second Friday tips blog! You can ask us questions using the hash tag #AskOracleVirtualization on Twitter and we'll do our best to answer them. Today we've got a VDI-related question on linked clones: Question: I want to use linked clones with Oracle Virtual Desktop Infrastructure. What are my options? Answer by John Renko, Consulting Developer, Oracle: First, linked clones are available with the Oracle VirtualBox hypervisor only. Second, your choice of storage will affect the rest of your architecture. If you are using a SAN presenting iSCSI LUNs, you can have linked clones with an Oracle Enterprise Linux-based hypervisor running VirtualBox. OEL will use OCFS2 to allow VirtualBox to create the linked clones. Because of the OCFS2 requirement, a Solaris-based VirtualBox hypervisor will not be able to support linked clones on remote iSCSI storage. If you are using the local storage option on your hypervisors, you will have linked clones with Solaris- or Linux-based hypervisors running VirtualBox. In all cases, Oracle Virtual Desktop Infrastructure makes the right selection for creating clones - sparse or linked - behind the scenes. Plan your architecture accordingly if you want to ensure you have the higher-performing linked clones.

    Read the article

  • Intermediate SSL Certificates on Azure Websites

    - by amhed
    I have successfully configured an Extended Validation certificate on an Azure Website following this article: http://www.windowsazure.com/en-us/documentation/articles/web-sites-configure-ssl-certificate/ The main (non-technical) stakeholder of the web application went to great lengths to validate that our site is secure. He went to this site to check the validity of our SSL: http://www.whynopadlock.com/ The site threw the following error: "SSL verification issue (Possibly mis-matched URL or bad intermediate cert.). Details: ERROR: no certificate subject alternative name matches" The certificate is installed using IP-based SSL instead of SNI. This is done because some site visitors still use Internet Explorer 8 on Windows XP, which has no support for SNI and throws a security warning. Is my certificate correctly installed? I received three .CRT files from my SSL provider: PrimaryIntermediate.crt SecondaryIntermediate.crt EndCertificate.crt This is how I exported our certificate as a .PFX file to Azure: openssl pkcs12 -export -out myserver.pfx -inkey myserver.key -in myserver.crt

    Read the article
