Search Results

Search found 7009 results on 281 pages for 'dynamic recompile'.

Page 133/281 | < Previous Page | 129 130 131 132 133 134 135 136 137 138 139 140  | Next Page >

  • Setup shared internet connection on virtualbox with fixed IP

    - by Tom
    I am a web developer and until recently I was using Ubuntu as my OS. For many reasons, I have switched back to Windows. I still want to keep my server on a Linux platform, so I set up my local server as a virtual machine. Everything works great, but I have a little struggle with the networking. Since I work in different places and move around between clients, I connect to all sorts of networks with different settings. That means the possible IP range is very dynamic, which causes issues when I work on my local server. At the moment I have a dynamic IP on the host and a static IP on the guest. That way I can access the server from my host (by adding a record to the hosts file), and I also have an internet connection on the guest. But once I change networks, it stops working (assuming the new network has a different configuration). My question is: how do I set up host-guest networking so that no matter what network I connect to, I keep the static IP on the guest (which is registered in the hosts file on my host so I can reach the webserver) and still have an internet connection on the guest? I hope that makes sense. Thank you
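
    A sketch of one common way to get what the question asks for - a guest IP that never changes plus working internet - is to give the VM two adapters: NAT for outbound internet and a host-only network for the host-to-guest connection, since the host-only network is independent of whatever client network the laptop joins. The commands below are an untested outline; the VM name "devserver" and the 192.168.56.x addresses are placeholders, not details from the question.

        VBoxManage hostonlyif create                      # creates vboxnet0 on the host
        VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.56.1 --netmask 255.255.255.0
        VBoxManage modifyvm "devserver" --nic1 nat        # adapter 1: internet via NAT
        VBoxManage modifyvm "devserver" --nic2 hostonly --hostonlyadapter2 vboxnet0

        # Inside the guest, pin the second interface to a static address on the
        # host-only network (Debian/Ubuntu-style /etc/network/interfaces):
        #   auto eth1
        #   iface eth1 inet static
        #       address 192.168.56.10
        #       netmask 255.255.255.0

    The hosts-file entry on the host would then point at 192.168.56.10 and keeps working regardless of which office or client network the host is on.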

    Read the article

  • Hyper-V and attaching physical disks [migrated]

    - by Mike Christiansen
    So, I'm looking at rebuilding my home server. My current setup is the following: Windows 7 Ultimate; a 1TB boot drive (my smallest drive); a Windows dynamic spanned volume containing 1x 1TB drive and 2x 2TB drives, totalling 5TB. I am upgrading to a hardware RAID controller, and I would like to run Hyper-V Server Core. However, I want to retain the ability to join my "file server" to a homegroup, so I must use Windows 7. I know VHDs can only be like 127GB or something, so I obviously need to directly connect disks to my Windows 7 machine. Here is my plan: Server Core 2008 R2 (Hyper-V); a 1TB boot drive (storing the VHDs for the VMs' boot drives), possibly in a RAID 1 with my other 1TB drive; 5x 2TB drives (1x 2TB drive as hot spare), totalling 10TB, directly attached to a Windows 7 VM so that the homegroup can serve this array. In the past, I directly attached the Windows dynamic volume to a Windows 7 VM, and performance was abysmal. The question is, with hardware RAID, will it really make that much of a difference? Server specs: Intel Core 2 Quad Q9550 2.83GHz, Asus Maximus II Formula (PCI-E x16), 8GB DDR2 RAM PC2-6400 (yes, I know it's a bit out of date)

    Read the article

  • Cannot make bind9 forward DNS query to subdomain unless recursive enabled

    - by PP.
    I am trying to develop my own dynamic DNS. I'm running my own custom DNS for the subdomain on port 5353. ASCII diagram: INET --->:53 Bind 9 --->:5353 node.js | V zone_files I have example.com. The node.js DNS is for dyn.example.com. In my /etc/bind/named.conf.local I have: zone "example.com" { type master; file "/etc/bind/db.com.example"; allow-transfer { zonetxfrsafe; }; }; zone "dyn.example.com" IN { # DYNAMIC type forward; forwarders { 127.0.0.1 port 5353; }; forward only; }; I've even gone so far as to add a NS in my example.com zone file: $TTL 86400 @ IN SOA ns.example.com. hostmaster.example.com. ( 2013070104 ; Serial 7200 ; Refresh 1200 ; Retry 2419200 ; Expire 86400 ) ; Negative Cache TTL ; NS ns ; inet of our nameserver ns A 1.2.3.4 ; NS record for subdomain dyn NS ns When I attempt to get a record from the subdomain server it doesn't get forwarded: dig @127.0.0.1 test.dyn.example.com However if I turn recursive on in /etc/bind/named.conf.options: options { recursion yes; } .. then I CAN see the request going to the subdomain server. But I don't want recursion yes; in my Bind configuration as it is poor security practice (and allows all-and-sundry requests that are not related to my managed zones). How does one forward (proxy) zone queries for just one zone? Or do I give up on Bind altogether and find a DNS server that can actually forward specific queries?
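
    One way to keep a "type forward" zone working without turning the server into an open resolver is to leave recursion on but restrict who is allowed to recurse. This is only a sketch of that idea, not a verified fix for the setup above; the acl name and addresses are placeholders.

        acl "trusted" { 127.0.0.1; ::1; };        // placeholder: clients allowed to recurse

        options {
            recursion yes;                        // required for zone-level forwarding to work
            allow-recursion { trusted; };         // but only for the trusted clients
        };

    Authoritative answers for example.com are unaffected by allow-recursion, so the zones the server actually owns stay reachable by everyone.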

    Read the article

  • Clean Install: Ubuntu 8.04, 10.04 or something else?

    - by Robert S. Barnes
    I'm faced with a pending clean install due to a dying hard drive in my old Dell Inspiron 8600 laptop. I've been running Ubuntu 8.04 on it since... 8.04 and I've been more or less happy with it except that it's a PITA to recompile the kernel or do any other kernel related work. I mostly do software dev on it, gcc, gvim, c/c++/perl/php/mysql and running vmware server 2.0. I've heard mixed reviews of 10.04, and am wondering what to put on the new HD. I'm even considering just sticking with 8.04 as it seems to mostly meet my needs. Any suggestions?

    Read the article

  • How expensive is a hostname in htaccess? Other solutions possible?

    - by Nanne
    For easy allowing or disallowing of dynamic IP addresses you can add them as a hostname in a .htaccess file. As I have read from ".htaccess allow from hostname?", it does a reverse lookup on the connecting IP address to see if the response matches the allowed name. (Well, actually Apache does a double lookup: first a reverse lookup and then a forward lookup on the result of the reverse.) This is the reason we are currently not using dynamic-IP hostnames in the .htaccess: it "sounds" quite heavy, with 2 extra lookups for every request. Is this indeed heavy, and would a reasonably busy server that would rather have less load than more get away with it :)? (E.g.: how does this 'load' compare to the rest? If a request is 1000 times more expensive than the lookups it might be negligible; otoh, it could be the final straw :) ) Are there other solutions? I can of course write a script that does a lookup of the hostname and puts it in the .htaccess files, but this feels a bit like a hack.
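
    On the script idea at the end: a rough, untested sketch of a cron job that resolves the dynamic hostname once and regenerates the .htaccess from a template, so Apache never has to do the per-request double lookup. The hostname, paths and the @DYNIP@ token are all placeholders.

        #!/bin/sh
        # resolve the dynamic hostname once (placeholder name)
        IP=$(dig +short home.example-dyndns.org | tail -n 1)
        [ -n "$IP" ] || exit 1
        # .htaccess.tmpl holds the static rules plus a literal @DYNIP@ marker
        sed "s/@DYNIP@/$IP/" /var/www/site/.htaccess.tmpl > /var/www/site/.htaccess

    Run it every few minutes from cron; the cost then becomes one DNS lookup per interval instead of two per request.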

    Read the article

  • Help: My SSL Certificate expired but I can't renew this weekend, is there a way I can disable SSL through IIS?

    - by shogun
    Or perhaps force a redirect to HTTP when they request HTTPS? I tried removing the SSL port settings under 'Advanced' but it broke the login page. I don't want to do a deploy/recompile right now, but I am able to edit the VIEWS. However, I am thinking there may be a way for IIS to do this, but then again I think the .NET code tries to force SSL. Crap. This is bull crap because I had been bugging people about getting the new certificate since December and they were like "oh, we will take care of it"... I think it may have even gotten to the point where they were getting annoyed with me hassling them about it! (Rant over.) And yes, the key point is that the browser's security error will cause more support tickets than removing SSL entirely for one day would, because users very likely wouldn't even notice it being gone.

    Read the article

  • Visual Studio 2010 locking referenced Assembly

    - by cunningdave
    I have a sandbox app that is built from the simple WPF Application template. This sandbox references an assembly that I am also building which contains the definition of a UserControl (WPF). I am instantiating this user control in the sandbox, to test the control's behaviour. The point of all this is to speed up development. This worked fine, but recently the .Vshost.exe paired with the sandbox process won't shut down. This prevents me from recompiling the Controls library, though ironically I can recompile the sandbox application. I can't kill the vshost process with Task Manager... only restarting VS2010 will clear it out. But every time I run the application from VS, the process just hangs there, blocking my workflow. I'm at a loss. Any ideas what could be causing this? Or does someone have any proposed workaround (mega-kill switch, perhaps?)

    Read the article

  • Slow speed for UFS mounted drive in Linux

    - by Incredible
    Hi, I have a disk from a Sun OS machine (ufs filesystem), and I want to mount it on my Debian machine in read/write mode. Since by default Linux doesn't support writing to the ufs filesystem, I had to recompile the kernel with the flag CONFIG_UFS_FS_WRITE=y. Now I am able to write to the filesystem, but the read/write speed is very slow, around 120 KB/s. Any idea what is wrong and how to resolve this issue? Thank you in advance.
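
    For reference, write support aside, the kernel also needs to be told which UFS flavour it is looking at, which is done with the ufstype mount option; and write support for UFS is still flagged experimental in the kernel config, which may itself account for some of the slowness. A hedged example mount line (device and mount point are placeholders; sunx86 would be the flavour for x86 Solaris slices):

        sudo mount -t ufs -o ufstype=sun,rw /dev/sdb1 /mnt/sunos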

    Read the article

  • Heaps of Trouble?

    - by Paul White NZ
    If you’re not already a regular reader of Brad Schulz’s blog, you’re missing out on some great material.  In his latest entry, he is tasked with optimizing a query run against tables that have no indexes at all.  The problem is, predictably, that performance is not very good.  The catch is that we are not allowed to create any indexes (or even new statistics) as part of our optimization efforts. In this post, I’m going to look at the problem from a slightly different angle, and present an alternative solution to the one Brad found.  Inevitably, there’s going to be some overlap between our entries, and while you don’t necessarily need to read Brad’s post before this one, I do strongly recommend that you read it at some stage; he covers some important points that I won’t cover again here. The Example We’ll use data from the AdventureWorks database, copied to temporary unindexed tables.  A script to create these structures is shown below: CREATE TABLE #Custs ( CustomerID INTEGER NOT NULL, TerritoryID INTEGER NULL, CustomerType NCHAR(1) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, ); GO CREATE TABLE #Prods ( ProductMainID INTEGER NOT NULL, ProductSubID INTEGER NOT NULL, ProductSubSubID INTEGER NOT NULL, Name NVARCHAR(50) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, ); GO CREATE TABLE #OrdHeader ( SalesOrderID INTEGER NOT NULL, OrderDate DATETIME NOT NULL, SalesOrderNumber NVARCHAR(25) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, CustomerID INTEGER NOT NULL, ); GO CREATE TABLE #OrdDetail ( SalesOrderID INTEGER NOT NULL, OrderQty SMALLINT NOT NULL, LineTotal NUMERIC(38,6) NOT NULL, ProductMainID INTEGER NOT NULL, ProductSubID INTEGER NOT NULL, ProductSubSubID INTEGER NOT NULL, ); GO INSERT #Custs ( CustomerID, TerritoryID, CustomerType ) SELECT C.CustomerID, C.TerritoryID, C.CustomerType FROM AdventureWorks.Sales.Customer C WITH (TABLOCK); GO INSERT #Prods ( ProductMainID, ProductSubID, ProductSubSubID, Name ) SELECT P.ProductID, P.ProductID, P.ProductID, P.Name FROM AdventureWorks.Production.Product P WITH (TABLOCK); GO INSERT #OrdHeader ( SalesOrderID, OrderDate, SalesOrderNumber, CustomerID ) SELECT H.SalesOrderID, H.OrderDate, H.SalesOrderNumber, H.CustomerID FROM AdventureWorks.Sales.SalesOrderHeader H WITH (TABLOCK); GO INSERT #OrdDetail ( SalesOrderID, OrderQty, LineTotal, ProductMainID, ProductSubID, ProductSubSubID ) SELECT D.SalesOrderID, D.OrderQty, D.LineTotal, D.ProductID, D.ProductID, D.ProductID FROM AdventureWorks.Sales.SalesOrderDetail D WITH (TABLOCK); The query itself is a simple join of the four tables: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #Prods P JOIN #OrdDetail D ON P.ProductMainID = D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID JOIN #OrdHeader H ON D.SalesOrderID = H.SalesOrderID JOIN #Custs C ON H.CustomerID = C.CustomerID ORDER BY P.ProductMainID ASC OPTION (RECOMPILE, MAXDOP 1); Remember that these tables have no indexes at all, and only the single-column sampled statistics SQL Server automatically creates (assuming default settings).  The estimated query plan produced for the test query looks like this (click to enlarge): The Problem The problem here is one of cardinality estimation – the number of rows SQL Server expects to find at each step of the plan.  The lack of indexes and useful statistical information means that SQL Server does not have the information it needs to make a good estimate.  
Every join in the plan shown above estimates that it will produce just a single row as output.  Brad covers the factors that lead to the low estimates in his post. In reality, the join between the #Prods and #OrdDetail tables will produce 121,317 rows.  It should not surprise you that this has rather dire consequences for the remainder of the query plan.  In particular, it makes a nonsense of the optimizer’s decision to use Nested Loops to join to the two remaining tables.  Instead of scanning the #OrdHeader and #Custs tables once (as it expected), it has to perform 121,317 full scans of each.  The query takes somewhere in the region of twenty minutes to run to completion on my development machine. A Solution At this point, you may be thinking the same thing I was: if we really are stuck with no indexes, the best we can do is to use hash joins everywhere. We can force the exclusive use of hash joins in several ways, the two most common being join and query hints.  A join hint means writing the query using the INNER HASH JOIN syntax; using a query hint involves adding OPTION (HASH JOIN) at the bottom of the query.  The difference is that using join hints also forces the order of the join, whereas the query hint gives the optimizer freedom to reorder the joins at its discretion. Adding the OPTION (HASH JOIN) hint results in this estimated plan: That produces the correct output in around seven seconds, which is quite an improvement!  As a purely practical matter, and given the rigid rules of the environment we find ourselves in, we might leave things there.  (We can improve the hashing solution a bit – I’ll come back to that later on). Faster Nested Loops It might surprise you to hear that we can beat the performance of the hash join solution shown above using nested loops joins exclusively, and without breaking the rules we have been set. The key to this part is to realize that a condition like (A = B) can be expressed as (A <= B) AND (A >= B).  Armed with this tremendous new insight, we can rewrite the join predicates like so: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #OrdDetail D JOIN #OrdHeader H ON D.SalesOrderID >= H.SalesOrderID AND D.SalesOrderID <= H.SalesOrderID JOIN #Custs C ON H.CustomerID >= C.CustomerID AND H.CustomerID <= C.CustomerID JOIN #Prods P ON P.ProductMainID >= D.ProductMainID AND P.ProductMainID <= D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID ORDER BY D.ProductMainID OPTION (RECOMPILE, LOOP JOIN, MAXDOP 1, FORCE ORDER); I’ve also added LOOP JOIN and FORCE ORDER query hints to ensure that only nested loops joins are used, and that the tables are joined in the order they appear.  The new estimated execution plan is: This new query runs in under 2 seconds. Why Is It Faster? The main reason for the improvement is the appearance of the eager Index Spools, which are also known as index-on-the-fly spools.  If you read my Inside The Optimiser series you might be interested to know that the rule responsible is called JoinToIndexOnTheFly. An eager index spool consumes all rows from the table it sits above, and builds a index suitable for the join to seek on.  Taking the index spool above the #Custs table as an example, it reads all the CustomerID and TerritoryID values with a single scan of the table, and builds an index keyed on CustomerID.  The term ‘eager’ means that the spool consumes all of its input rows when it starts up.  
The index is built in a work table in tempdb, has no associated statistics, and only exists until the query finishes executing. The result is that each unindexed table is only scanned once, and just for the columns necessary to build the temporary index.  From that point on, every execution of the inner side of the join is answered by a seek on the temporary index – not the base table. A second optimization is that the sort on ProductMainID (required by the ORDER BY clause) is performed early, on just the rows coming from the #OrdDetail table.  The optimizer has a good estimate for the number of rows it needs to sort at that stage – it is just the cardinality of the table itself.  The accuracy of the estimate there is important because it helps determine the memory grant given to the sort operation.  Nested loops join preserves the order of rows on its outer input, so sorting early is safe.  (Hash joins do not preserve order in this way, of course). The extra lazy spool on the #Prods branch is a further optimization that avoids executing the seek on the temporary index if the value being joined (the ‘outer reference’) hasn’t changed from the last row received on the outer input.  It takes advantage of the fact that rows are still sorted on ProductMainID, so if duplicates exist, they will arrive at the join operator one after the other. The optimizer is quite conservative about introducing index spools into a plan, because creating and dropping a temporary index is a relatively expensive operation.  It’s presence in a plan is often an indication that a useful index is missing. I want to stress that I rewrote the query in this way primarily as an educational exercise – I can’t imagine having to do something so horrible to a production system. Improving the Hash Join I promised I would return to the solution that uses hash joins.  You might be puzzled that SQL Server can create three new indexes (and perform all those nested loops iterations) faster than it can perform three hash joins.  The answer, again, is down to the poor information available to the optimizer.  Let’s look at the hash join plan again: Two of the hash joins have single-row estimates on their build inputs.  SQL Server fixes the amount of memory available for the hash table based on this cardinality estimate, so at run time the hash join very quickly runs out of memory. This results in the join spilling hash buckets to disk, and any rows from the probe input that hash to the spilled buckets also get written to disk.  The join process then continues, and may again run out of memory.  This is a recursive process, which may eventually result in SQL Server resorting to a bailout join algorithm, which is guaranteed to complete eventually, but may be very slow.  The data sizes in the example tables are not large enough to force a hash bailout, but it does result in multiple levels of hash recursion.  You can see this for yourself by tracing the Hash Warning event using the Profiler tool. The final sort in the plan also suffers from a similar problem: it receives very little memory and has to perform multiple sort passes, saving intermediate runs to disk (the Sort Warnings Profiler event can be used to confirm this).  Notice also that because hash joins don’t preserve sort order, the sort cannot be pushed down the plan toward the #OrdDetail table, as in the nested loops plan. Ok, so now we understand the problems, what can we do to fix it?  
We can address the hash spilling by forcing a different order for the joins: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #Prods P JOIN #Custs C JOIN #OrdHeader H ON H.CustomerID = C.CustomerID JOIN #OrdDetail D ON D.SalesOrderID = H.SalesOrderID ON P.ProductMainID = D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID ORDER BY D.ProductMainID OPTION (MAXDOP 1, HASH JOIN, FORCE ORDER); With this plan, each of the inputs to the hash joins has a good estimate, and no hash recursion occurs.  The final sort still suffers from the one-row estimate problem, and we get a single-pass sort warning as it writes rows to disk.  Even so, the query runs to completion in three or four seconds.  That’s around half the time of the previous hashing solution, but still not as fast as the nested loops trickery. Final Thoughts SQL Server’s optimizer makes cost-based decisions, so it is vital to provide it with accurate information.  We can’t really blame the performance problems highlighted here on anything other than the decision to use completely unindexed tables, and not to allow the creation of additional statistics. I should probably stress that the nested loops solution shown above is not one I would normally contemplate in the real world.  It’s there primarily for its educational and entertainment value.  I might perhaps use it to demonstrate to the sceptical that SQL Server itself is crying out for an index. Be sure to read Brad’s original post for more details.  My grateful thanks to him for granting permission to reuse some of his material. Paul White Email: [email protected] Twitter: @PaulWhiteNZ

    Read the article

  • Avoid Memory Leaks in SharePoint 2010 Development

    - by ybbest
    When you develop a SharePoint solution using code, you need to dispose of SPWeb objects appropriately to avoid memory leaks. The general guidelines are: dispose of SPWeb objects obtained from OpenWeb or from enumerating Webs or AllWebs; do not dispose of ParentWeb, RootWeb, or the SPWeb from SPContext. There are more rules than the ones listed above, and as a smart SharePoint developer you do not have to memorize them all. There is a tool called SharePoint Dispose Checker (SPDisposeCheck) which can help you find potential memory leaks. To use SPDisposeCheck in your solution, you need to download the tool from the MSDN Code Gallery and install it on your development machine as follows. 1. Run the installer with elevated privileges. 2. Accept the agreement and click Next. 3. Select those two options and click Next. 4. Select Everyone and click Next. 5. Go to Tools -> SharePoint Dispose Check to configure SPDisposeCheck. 6. You can change "Treat problems as Errors" to Warnings. 7. After clicking Save, you are all set to use the tool. Recompiling my project, I can get the result below. References: SharePoint 2007/2010 “Do Not Dispose Guidance” + SPDisposeCheck

    Read the article

  • Fix invalid objects and components - BEFORE you upgrade!

    - by Mike Dietrich
    We are currently running a Tech Challenge Workshop with 25 Oracle consultants and support folks from all over EMEA. We call it Tech Challenge because we separate these experts, who have between 5 and 20 years of Oracle experience, into 5 groups - and each group has to complete its own special challenge, such as moving a database from 10.2 to Exadata V2 or upgrading from single-instance 10.2 to Real Application Clusters 11.2 with the new Grid Infrastructure. We actually start this training with a few presentation pieces about upgrades, Real Application Testing and GoldenGate. And one topic I always point out: keep your database tidy before the upgrade!!! Clean up all invalid objects - especially in the SYS and SYSTEM schemas - BEFORE you upgrade. Use utlrp.sql to recompile invalid objects. Use Note:753041.1 to diagnose and fix invalid components. Always do this BEFORE you start the upgrade, even if it may take some time. Otherwise your upgrade could fail, or significant parts of the database packages could be invalid after the upgrade as well. I just came across this today as one group had ~240 invalid objects in the database - and because the original system was still there, we could prove that the objects had been invalid before. Good job, BUT ... :-)
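
    For anyone following along, a minimal SQL*Plus sketch of that pre-upgrade check - list what is invalid, check the components, then recompile - using only standard dictionary views and the stock utlrp.sql script:

        -- invalid objects, grouped so the worst schemas stand out (run as SYSDBA)
        SELECT owner, object_type, COUNT(*)
        FROM   dba_objects
        WHERE  status = 'INVALID'
        GROUP  BY owner, object_type
        ORDER  BY 3 DESC;

        -- component status; anything not VALID is a case for Note:753041.1
        SELECT comp_id, version, status FROM dba_registry;

        -- recompile whatever can be recompiled
        @?/rdbms/admin/utlrp.sql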

    Read the article

  • Unable to Install VirtualBox Due to Missing Kernel Module

    - by SoftTimur
    I am trying to install VirtualBox on my Ubuntu. I first tried to sudo apt-get install virtualbox-ose in a terminal, but after the configuration step, it fails with an error: No suitable module for running kernel found When proceeding with starting virtualbox, I get this error: WARNING: The character device /dev/vboxdrv does not exist. Please install the virtualbox-ose-dkms package and the appropriate headers, most likely linux-headers-generic. You will not be able to start VMs until this problem is fixed. So I tried the package from http://www.virtualbox.org/, but starting VirtualBox fails with: WARNING: The vboxdrv kernel module is not loaded. Either there is no module available for the current kernel (2.6.38-8-generic-pae) or it failed to load. Please recompile the kernel module and install it by sudo /etc/init.d/vboxdrv setup You will not be able to start VMs until this problem is fixed. So I ran sudo /etc/init.d/vboxdrv setup, but it fails too: * Stopping VirtualBox kernel modules [ OK ] * Uninstalling old VirtualBox DKMS kernel modules [ OK ] * Trying to register the VirtualBox kernel modules using DKMS Error! Your kernel headers for kernel 2.6.38-8-generic-pae cannot be found at /lib/modules/2.6.38-8-generic-pae/build or /lib/modules/2.6.38-8-generic-pae/source. * Failed, trying without DKMS * Recompiling VirtualBox kernel modules * Look at /var/log/vbox-install.log to find out what went wrong The contents of /var/log/vbox-install.log. As I am stuck, I also tried to install kernel-devel with yum, still fruitless: root@ubuntu# yum install kernel-devel Setting up Install Process No package kernel-devel available. Nothing to do Now I've no idea how to correct this. Any ideas?
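
    The DKMS output above points at missing kernel headers for the running kernel (and, as an aside, yum is Fedora/Red Hat tooling - on Ubuntu the package manager is apt). A hedged sketch of the usual fix:

        # install headers that match the running kernel plus the build toolchain
        sudo apt-get install build-essential dkms linux-headers-$(uname -r)
        # then rebuild the VirtualBox kernel modules
        sudo /etc/init.d/vboxdrv setup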

    Read the article

  • #mvvmlight V4 update for Win8 RTM

    - by Laurent Bugnion
    With Windows 8 RTM out the door (at least for some of us), it was also time to create an update to MVVM Light. I selected the V4 RTM to do this (V4.0.23). This RTM version was released a few weeks ago without much in the way of bells and whistles because I was just too busy to write much about it. Now, after some vacation, I will resume blogging on all my favorite topics, including of course MVVM Light. Upgrade: Upgrading the installation should not require an uninstall, so try to simply run the MSI downloaded from the Download section at http://mvvmlight.codeplex.com/. Should you encounter any issues, try to uninstall the old version first, following the steps at http://galasoft.ch/mvvm/cleaning/. Upgrading current apps from Win8 RP to Win8 RTM: I didn’t recompile the assemblies of MVVM Light, so if you had a version running with V4.0.23 on Windows 8 RP, you should be able to use the same DLLs on Windows 8 RTM. If you were using earlier versions, however, I would recommend doing an upgrade. I noticed a warning regarding the signing certificate. It is due to the PFX key, which appears to be outdated after the upgrade to Windows 8 RTM. I solved this warning by replacing the old PFX key with a new one I copied from a new project. The warning did not cause the build to fail, though. About MVVM Light V4 RTM: The RTM finalizes quite a lot of issues. The change log is available at http://galasoft.ch/mvvm/installing/changes/. There are some issues that are either still open or that popped up since then, and I am working on V4.1, to be released in the next few weeks. In addition to that, I have plans to support Windows Phone 8 (when it comes) and have a nice list of ideas for V5 with a few new components. Thanks again for your continuous support of MVVM Light! Laurent Bugnion (GalaSoft)

    Read the article

  • What are the packages/libraries I should install before compiling Python from source?

    - by Lennart Regebro
    Once in a while I need to install a new Ubuntu (I used it both for desktop and servers) and I always forget a couple of libraries I should have installed before compiling, meaning I have to recompile, and it's getting annoying. So now I want to make a complete list of all library packages to install before compiling Python (and preferably how optional they are). This is the list I compiled with below help and by digging in setup.py. It is complete for Ubuntu 10.04 and 11.04 at least: build-essential (obviously) libz-dev (also pretty common and essential) libreadline-dev (or the Python prompt is crap) libncursesw5-dev libssl-dev libgdbm-dev libsqlite3-dev libbz2-dev More optional: tk-dev libdb-dev Ubuntu has no packages for v1.8.5 of the Berkeley database, nor (for obvious reasons) the Sun audio hardware, so the bsddb185 and sunaudiodev modules will still not be built on Ubuntu, but all other modules are built with the above packages installed. Python 2.5 and Python 2.6 also needs to have LDFLAGS set on Ubuntu 11.04 and later, to handle the new multi-arch layout: export LDFLAGS="-L/usr/lib/$(dpkg-architecture -qDEB_HOST_MULTIARCH)" For Python 2.6 and 2.7 you also need to explicitly enable SSL after running the ./configure script and before running make. In Modules/Setup there are lines like this: #SSL=/usr/local/ssl #_ssl _ssl.c \ # -DUSE_SSL -I$(SSL)/include -I$(SSL)/include/openssl \ # -L$(SSL)/lib -lssl -lcrypto Uncomment these lines and change the SSL variable to /usr: SSL=/usr _ssl _ssl.c \ -DUSE_SSL -I$(SSL)/include -I$(SSL)/include/openssl \ -L$(SSL)/lib -lssl -lcrypto Python 2.6 also needs Modules/_ssl.c modified to be used with OpenSSL 1.0, which is used in Ubuntu 11.10. At around line 300 you'll find this: else if (proto_version == PY_SSL_VERSION_SSL3) self->ctx = SSL_CTX_new(SSLv3_method()); /* Set up context */ else if (proto_version == PY_SSL_VERSION_SSL2) self->ctx = SSL_CTX_new(SSLv2_method()); /* Set up context */ else if (proto_version == PY_SSL_VERSION_SSL23) self->ctx = SSL_CTX_new(SSLv23_method()); /* Set up context */ Change that into: else if (proto_version == PY_SSL_VERSION_SSL3) self->ctx = SSL_CTX_new(SSLv3_method()); /* Set up context */ #ifndef OPENSSL_NO_SSL2 else if (proto_version == PY_SSL_VERSION_SSL2) self->ctx = SSL_CTX_new(SSLv2_method()); /* Set up context */ #endif else if (proto_version == PY_SSL_VERSION_SSL23) self->ctx = SSL_CTX_new(SSLv23_method()); /* Set up context */ This disables SSL_v2 support, which apparently is gone in OpenSSL1.0.
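
    Pulled together into one hedged sequence (package names exactly as listed above, aimed at Ubuntu 10.04/11.04; adjust for other releases):

        sudo apt-get install build-essential libz-dev libreadline-dev libncursesw5-dev \
            libssl-dev libgdbm-dev libsqlite3-dev libbz2-dev tk-dev libdb-dev
        # Ubuntu 11.04+ multi-arch, as noted above (Python 2.5/2.6):
        export LDFLAGS="-L/usr/lib/$(dpkg-architecture -qDEB_HOST_MULTIARCH)"
        ./configure
        make
        sudo make altinstall    # altinstall avoids overwriting the system python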

    Read the article

  • .NET Demon 1.0 Released

    - by theo.spears
    Today we're officially releasing version 1.0 of .NET Demon, the Visual Studio Extension Alex Davies and I have been working on for the last 6 months. There have been beta versions available for a while, but we have now released the first "official" version and made it available to purchase. If you haven't yet tried the tool, it's all about reducing the time between when you write a line of code and when you are able to try it out so you don't have to wait: Continuous compilation We use spare CPU cycles on your machine to compile your code in the background when you make changes, so assemblies are up to date whenever you want to run them. Some clever logic means we only recompile code which may have been affected by your changes. Continuous save .NET Demon can perform background saving, so you don't lose any work in case of crashes or power failures, and are less likely to forget to commit changed files. Continuous testing (Experimental) The testing tool in .NET Demon watches which code you change in your solution, and automatically reruns tests which are impacted, so you learn about any breaking changes as quickly as possible. It also gives you inline test coverage information inside Visual Studio. Continuous testing is still experimental - it will work fine in many cases, but we know it's not yet perfect. Releasing version 1.0 doesn't mean we're pausing development or pushing out improvements. We will still be regularly providing new versions with improved functionality and fixes for any bugs people come across. Visit the .NET Demon product page to download

    Read the article

  • Qt, MSVC, and /Zc:wchar_t- == I want to blow up the world

    - by Noah Roberts
    So Qt is compiled with /Zc:wchar_t- on windows. What this means is that instead of wchar_t being a typedef for some internal type (__wchar_t I think) it becomes a typedef for unsigned short. The really cool thing about this is that the default for MSVC is the opposite, which of course means that the libraries you're using are likely compiled with wchar_t being a different type than Qt's wchar_t. This doesn't become an issue of course until you try to use something like std::wstring in your code; especially when one or more libraries have functions that accept it as parameters. What effectively happens is that your code happily compiles but then fails to link because it's looking for definitions using std::wstring<unsigned short...> but they only contain definitions expecting std::wstring<__wchar_t...> (or whatever). So I did some web searching and ran into this link: http://bugreports.qt.nokia.com/browse/QTBUG-6345 Based on the statement by Thiago Macieira, "Sorry, we will not support building Qt like this," I've been worried that fixing Qt to work like everything else might cause some problem and have been trying to avoid it. We recompiled all of our support libraries with the /Zc:wchar_t- flag and have been fairly content with that until a couple days ago when we started trying to port over (we're in the process of switching from Wx to Qt) some serialization code. Because of how win32 works, and because Wx just wraps win32, we've been using std::wstring to represent string data with the intent of making our product as i18n ready as possible. We did some testing and Wx did not work with multibyte characters when trying to print special stuff (even not so special stuff like the degree symbol was an issue). I'm not so sure that Qt has this problem since QString isn't just a wrapper to the underlying _TCHAR type but is a Unicode monster of some sort. At any rate, the serialization library in boost has compiled parts. We've attempted to recompile boost with /Zc:wchar_t- but so far our attempts to tell bjam to do this have gone unheeded. We're at an impasse. From where I'm sitting I have three options: Recompile Qt and hope it works with /Zc:wchar_t. There's some evidence around the web that others have done this but I have no way of predicting what will happen. All attempts to ask Qt people on forums and such have gone unanswered. Hell, even in that very bug report someone asks why and it just sat there for a year. Keep fighting with bjam until it listens. Right now I've got someone under me doing that and I have more experience fighting with things to get what I want but I do have to admit to getting rather tired of it. I'm also concerned that I'll KEEP running into this issue just because Qt wants to be a c**t. Stop using wchar_t for anything. Unfortunately my i18n experience is pretty much 0 but it seems to me that I just need to find the right to/from function in QString (it has a BUNCH) to encode the Unicode into 8-bytes and visa-versa. UTF8 functions look promising but I really want to be sure that no data will be lost if someone from Zimbabfuckegypt starts writing in their own language and the documentation in QString frightens me a little into thinking that could happen. Of course, I could always run into some library that insists I use wchar_t and then I'm back to 1 or 2 but I rather doubt that would happen. So, what's my question... Which of these options is my best bet? 
Is Qt going to eventually cause me to gouge out my own eyes because I decided to compile it with /Zc:wchar_t anyway? What's the magic incantation to get boost to build with /Zc:wchar_t- and will THAT cause permanent mental damage? Can I get away with just using the standard 8-bit (well, 'common' anyway) character classes and be i18n compliant/ready? How do other Qt developers deal with this mess?
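
    On the "magic incantation" question: the approach people usually suggest is passing the flag straight through Boost.Build's cxxflags feature on the bjam command line. This is an unverified sketch - the toolset version is a placeholder and I have not confirmed it clears the serialization link errors:

        bjam toolset=msvc-9.0 variant=release link=static threading=multi ^
            cxxflags=/Zc:wchar_t- --with-serialization stage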

    Read the article

  • OWB 11gR2 – JDBC Helper Utility

    - by David Allan
    One of the common queries when importing tables via JDBC with 11gR2 is determining why the import wizard doesn’t display the tables that you think it should. I often just use the script below to dump out the schemas, tables and columns that the JDBC driver is returning. This is useful in a few areas: to figure out what schema name is returned, so you can double-check it against the schema name you have used in the location (it is used in the DatabaseMetaData.getTables API call within the basic JDBC metadata import); to figure out the data types returned by the JDBC driver when you see columns skipped because of "no datatype supported" messages; also… I can do it via scripting and don’t need to recompile classes and stuff :-) Edit the tcl script and set the JDBC driver, the connection URL and the username and password (they are at the bottom of the script); the script then calls a basic tcl procedure which writes the schemas, tables and columns with various properties to standard out. For example, I executed it using the XML JDBC driver from ODI over a simple customers XML file and it writes the following metadata. You can add more details as you need and execute it from the OMBPlus panel within OWB. Download the sample tcl jdbc script here. There is a bunch of really useful stuff on OTN documenting this area (start with the white paper here) that is worth checking out, all related to the OWB SDK and covering everything from platform definitions, custom metadata importers, application adapters, code templates etc. You can find a bunch of goodies on the OWB SDK here.
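
    The tcl script boils down to the same handful of DatabaseMetaData calls you would make from plain JDBC, so a rough Java equivalent may help when poking at a driver outside OMBPlus. The URL and credentials come from the command line and nothing here is specific to OWB; pre-JDBC-4 drivers may additionally need a Class.forName call for the driver class.

        import java.sql.*;

        public class DumpJdbcMetadata {
            public static void main(String[] args) throws Exception {
                // args: <jdbc-url> <user> <password>
                Connection conn = DriverManager.getConnection(args[0], args[1], args[2]);
                DatabaseMetaData md = conn.getMetaData();
                try (ResultSet tables = md.getTables(null, null, "%", new String[] {"TABLE"})) {
                    while (tables.next()) {
                        String schema = tables.getString("TABLE_SCHEM");
                        String table  = tables.getString("TABLE_NAME");
                        System.out.println(schema + "." + table);
                        try (ResultSet cols = md.getColumns(null, schema, table, "%")) {
                            while (cols.next()) {
                                System.out.println("    " + cols.getString("COLUMN_NAME")
                                        + "  " + cols.getString("TYPE_NAME"));  // driver's own type name
                            }
                        }
                    }
                }
            }
        }

    The schema value printed here is exactly what needs to match the schema name used in the OWB location.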

    Read the article

  • Nvidia API mismatch

    - by Oli
    I had planned a day of relaxing with Portal 2 but on starting Steam (for the first time in a couple of weeks) I was greeted with the following message in the terminal: Error: API mismatch: the NVIDIA kernel module has version 270.41.19, but this NVIDIA driver component has version 270.41.06. Please make sure that the kernel module and all NVIDIA driver components have the same version. I'll confess I don't really know what it's talking about when it says driver. The version of nvidia-current is 270.41.19. I thought that was the driver and module, all in one. I use the X-SWAT PPA and I have noted that the nvidia-settings package has been bumped to 275.09.07. As this is just a settings application, I don't think this mismatch has anything to do with it. It's also not the same version as in the problem being described. I'd rather not purge back to the standard Nvidia driver as it's less than stable on my GTX580. I would accept an answer that takes the manual setup and makes it recompile when the kernel recompiles (i.e., some DKMS wizardry) but it has to work. I don't want to drop back to text mode every time I restart after a kernel upgrade. Edit: Minecraft works without a single complaint about driver versions. Penumbra dies with roughly the same error when entering a game.
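
    For what it's worth, this mismatch usually means the loaded kernel module and the user-space driver libraries came from two different package builds (easy to end up with when a PPA updates one while the old module is still the one in memory). A hedged checklist rather than a guaranteed fix:

        cat /proc/driver/nvidia/version     # version of the kernel module actually loaded
        dkms status                         # what DKMS has built for each installed kernel
        # reinstall the driver package so module and libraries come from the same build,
        # then reboot so the stale module is no longer loaded
        sudo apt-get install --reinstall nvidia-current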

    Read the article

  • How to make pulseaudio and ubuntu detect the same audio device as alsa driver

    - by Kiwy
    I use Ubuntu 14.04 x64 with gnome-shell on my laptop. I have a Bose Companion 5 (which is basically a USB sound system) and an HDMI port; both work perfectly when I boot with the cables plugged in. However, when my laptop goes to sleep or gets unplugged from those two outputs and I then plug the device back in, I end up with no hardware detection (only the built-in speakers) in pulse and in the gnome-shell sound output selector, while in alsamixer the device shows up and is ready. gstreamer-properties lets me select and test any device effectively, but while alsa recognizes any device on the fly, pulse is not capable of handling things correctly. My question is then: how can I make pulse detect and use the same hardware as alsa, or how can I completely and gracefully remove pulseaudio (meaning the volume applet running in gnome-shell)? I don't mind if that means recompiling half of gnome-shell, as long as those audio outputs work all the time. Pulse does not list my soundcard when I use the command pactl list cards, while the plug-and-play module for sound cards is loaded according to pactl list modules. I really don't know what to do; the behavior seems pretty random.
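
    A hedged first experiment before recompiling anything: pulseaudio discovers cards through module-udev-detect, and after a suspend or hot-unplug it sometimes only re-detects them once the daemon is restarted. Untested for this particular hardware:

        pactl list short modules | grep udev    # confirm module-udev-detect is loaded
        pulseaudio -k && pulseaudio --start     # restart the per-user daemon...
        pacmd list-cards                        # ...then check whether the USB/HDMI cards reappear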

    Read the article

  • Is dependency injection by hand a better alternative to composition and polymorphism?

    - by Drake Clarris
    First, I'm an entry-level programmer; in fact, I'm finishing an A.S. degree with a final capstone project over the summer. In my new job, when there isn't some project for me to do (they're waiting to fill the team with more new hires), I've been given books to read and learn from while I wait - some textbooks, others not so much (like Code Complete). After going through these books, I've turned to the internet to learn as much as possible, and started learning about SOLID and DI (we talked some about Liskov's substitution principle, but not much about the other SOLID ideas). So, having read up on it, I sat down to practice it and learn it better, and began writing some code that uses DI by hand (there are no DI frameworks on the development computers). Thing is, as I do it, I notice it feels familiar... it seems very much like work I've done in the past using composition of abstract classes with polymorphism. Am I missing a bigger picture here? Is there something about DI (at least by hand) that goes beyond that? I understand that the out-of-code configuration offered by some DI frameworks has great benefits as far as changing things without having to recompile, but when doing it by hand, I'm not sure it's any different from what I described above... Some insight into this would be very helpful!
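
    For a concrete frame of reference: "DI by hand" very often is nothing more exotic than constructor injection against an interface, which is exactly why it feels like the composition-plus-polymorphism you already know. The difference is mostly about where the wiring happens - the object no longer picks its own collaborators; a composition root at the edge of the program does. A toy sketch with invented names:

        // the consumer depends only on an abstraction...
        interface MessageSender { void send(String to, String body); }

        class SmtpSender implements MessageSender {
            public void send(String to, String body) { /* talk to an SMTP server */ }
        }

        class Notifier {
            private final MessageSender sender;
            Notifier(MessageSender sender) { this.sender = sender; }  // dependency handed in by the caller
            void notifyUser(String user) { sender.send(user, "hello"); }
        }

        // ...and the "composition root" (main) decides which implementation gets used
        class App {
            public static void main(String[] args) {
                Notifier n = new Notifier(new SmtpSender());
                n.notifyUser("user@example.org");
            }
        }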

    Read the article

  • How do I trust an off site application

    - by Pieter
    I need to implement something similar to a license server. This will have to be installed off-site at the customers' location and needs to communicate with other applications at the customers' site (the applications that use the licenses) and with an application running in our hosting center (for reporting and getting license information). My question is how to set this up in a way where I can trust that: the license server is really our application and not something that just simulates it; and there is no "man in the middle" (i.e. a proxy or something that alters the traffic). The first thing I thought of was to use client certificates, and that would solve at least point 2. However, what I'm worried about is that someone just decompiles the license server (it is built in .NET), alters some logic and recompiles it. This would be hard to detect from either connecting application. This doesn't have to be absolutely secure, since we have a limited number of customers with whom we have a trust relationship. However, I do want to make it more difficult than a simple decompile/recompile of the license server. I primarily want to protect against an employee or a nephew of the boss trying to be smart.

    Read the article

  • Why does running this program on 11.10 give a 'GLIBC_2.15 not found' error?

    - by RafLance
    I am trying to install Absinthe 2.0.4 on Ubuntu 11.10 on a netbook. When I try to run the install file, this keeps happening: rafael@RafLaptop:~/Desktop/absinthe-linux-2.0.4$ ./absinthe.x86 ./absinthe.x86: /lib/i386-linux-gnu/libc.so.6: version `GLIBC_2.15' not found (required by ./absinthe.x86) Do I need to upgrade GLIBC? If so, how do I do that? Since I'm on a netbook I can't use a LiveCD, so I wanted to know if there was a way I could fix this issue without reinstalling my whole OS. Any explanation of what GLIBC is exactly would be great too, since this is a learning experience for me. I know that GLIBC is part of libc.so.6, so I tried to run sudo apt-get install libc.so.6 but was told that it was up to date. But GLIBC isn't? I hope this articulates my problem well; if there are any pieces of missing info or questions to clarify my question, please let me know! EDIT/UPDATE: After some help in the Ask Ubuntu chat room from user izx, I have gathered the following information/understanding: - I need to run this program on Ubuntu 12.04 or recompile it from source. - Upgrading libc on Oneiric to 2.15, while possible, is not an easy task and is not officially supported.
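
    To see exactly what the loader is complaining about, you can ask the installed libc for its version directly (no changes to the system); Oneiric ships glibc/eglibc 2.13, which is why a binary linked against 2.15 refuses to start:

        ldd --version                                          # prints the glibc version in use
        /lib/i386-linux-gnu/libc.so.6                          # the library prints its own version when executed
        strings /lib/i386-linux-gnu/libc.so.6 | grep ^GLIBC_   # symbol versions it actually provides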

    Read the article

  • Recompiling an old Fortran 2/4/66 program that was compiled for OS/2; need it to run in DOS

    - by Mike Hansen
    I am helping an old scientist with some problems and have 1 program that he found and modified about 20 yrs. ago, and runs fine as a 32 bit os\2 executable but i need it to run under dos! I am not a programmer but a good hardware & software man, so I'am pretty stupid about this problem, but here go's I have downloaded 6 different compilers watcom77,silverfrost ftn95,gfortran,2 versions of g77 and f80. Watcom says it is to old of program,find older compiler,silverfrost opens it,debugs, etc. but is changing all the subroutines from "real" to "complex" and vice-vesa,and the g77's seem to install perfectly (library links and etc.) but wont even compile the test.f programs.My problem is 1; to recompile "as is" or "upgrade" the code? PROGRAM xconvlv INTEGER N,N2,M PARAMETER (N=2048,N2=2048,M=128) INTEGER i,isign REAL data(n),respns(m),resp(n),ans(n2),t3(n),DUMMY OPEN(UNIT=1, FILE='C:\QKBAS20\FDATA1.DAT') DO 1 i=1,N READ(1,*) T3(i), data(i), DUMMY continue CLOSE(UNIT-1) do 12 i=1,N respns(i)=data(i) resp(i)=respns(i) continue isign=-1 call convlv(data,N,resp,M,isign,ans) OPEN(UNIT=1,FILE='C:\QKBAS20\FDATA9.DAT') DO 14 i=1,N WRITE(1,*) T3(i), ans(i) continue END SUBROUTINE CONVLV(data,n,respns,m,isign,ans) INTEGER isign,m,n,NMAX REAL data(n),respns(n) COMPLEX ans(n) PARAMETER (NMAX=4096) * uses realft, twofft INTEGER i,no2 COMPLEX fft (NMAX) do 11 i=1, (m-1)/2 respns(n+1-i)=respns(m+1-i) continue do 12 i=(m+3)/2,n-(m-1)/2 respns(i)=0.0 continue call twofft (data,respns,fft,ans,n) no2=n/2 do 13 i=1,no2+1 if (isign.eq.1) then ans(i)=fft(i)*ans(i)/no2 else if (isign.eq.-1) then if (abs(ans(i)) .eq.0.0) pause ans(i)=fft(i)/ans(i)/no2 else pause 'no meaning for isign in convlv' endif continue ans(1)=cmplx(real (ans(1)),real (ans(no2+1))) call realft(ans,n,-1) return END SUBROUTINE realft(data,n,isign) INTEGER isign,n REAL data(n) * uses four1 INTEGER i,i1,i2,i3,i4,n2p3 REAL c1,c2,hli,hir,h2i,h2r,wis,wrs DOUBLE PRECISION theta,wi,wpi,wpr,wr,wtemp theta=3.141592653589793d0/dble(n/2) cl=0.5 if (isign.eq.1) then c2=-0.5 call four1(data,n/2,+1) else c2=0.5 theta=-theta endif (etc.,etc., etc.) SUBROUTINE twofft(data,data2,fft1,fft2,n) INTEGER n REAL data1(n,data2(n) COMPLEX fft1(n), fft2(n) * uses four1 INTEGER j,n2 COMPLEX h1,h2,c1,c2 c1=cmplx(0.5,0.0) c2=cmplx(0.0,-0.5) do 11 j=1,n fft1(j)=cmplx(data1(j),data2(j) continue call four1 (fft1,n,1) fft2(1)=cmplx(aimag(fft1(1)),0.0) fft1(1)=cmplx(real(fft1(1)),0.0) n2=n+2 do 12 j=2,n/2+1 h1=c1*(fft1(j)+conjg(fft1(n2-j))) h2=c2*(fft1(j)-conjg(fft1(n2-j))) fft1(j)=h1 fft1(n2-j)=conjg(h1) fft2(j)=h2 fft2(n2-j)=conjg(h2) continue return END SUBROUTINE four1(data,nn,isign) INTEGER isign,nn REAL data(2*nn) INTEGER i,istep,j,m,mmax,n REAL tempi,tempr DOUBLE PRECISION theta, wi,wpi,wpr,wr,wtemp n=2*nn j=1 do 11 i=1,n,2 if(j.gt.i)then tempr=data(j) tempi=data(j+1) (etc.,etc.,etc.,) continue mmax=istep goto 2 endif return END There are 4 subroutines with this that are about 3 pages of code and whould be much easier to e-mail to someone if their able to help me with this.My e-mail is [email protected] , or if someone could tell me where to get a "working" compiler that could recompile this? THANK-YOU, THANK-YOU,and THANK-YOU for any help with this! The errors Iam getting are; 1.In a call to CONVLV from another procedure,the first argument was of a type REAL(kind=1), it is now a COMPLEX(kind=1) 2.In a call to REALFT from another procedure, ... 
    COMPLEX(kind=1), it is now a REAL(kind=1). 3. In a call to TWOFFT from... COMPLEX(kind=1), it is now a REAL(kind=1). 4. In a previous call to FOUR1, the first argument was of type REAL(kind=1); it is now a COMPLEX(kind=1).
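
    All four messages describe the same underlying issue: an argument declared with one type at the call site and another type in the subroutine (for example, the main program declares ans as REAL while CONVLV declares it COMPLEX), which Fortran 66-era compilers never checked across program units but modern ones do. Numerical-Recipes-style code reuses storage like this on purpose, so one untested way through - shown below for a recent gfortran, 10 or newer, with the file name as a placeholder; older compilers instead need the declarations made consistent - is to keep the source as-is and relax the check:

        # compile the legacy source with argument-mismatch checking downgraded to a warning
        gfortran -std=legacy -fallow-argument-mismatch -o convlv convlv.f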

    Read the article

  • Problem with missing JSON functions on PHP 5.2.6 / Plesk 8.4

    - by Drachenviech
    I have a vserver running openSuse 10.3, Apache 2 and Plesk 8.4. I can update/upgrade neither, as it is apparently not recommended to upgrade openSuse 10.3 (and an update to the EOL 10.4 does not seem to make much sense) and Plesk fails to update no matter what version I try (even fails to upgrade to 8.4.1). Still I can live with that somehow, primarily because I don’t have the time to do a fresh remote install on the vserver. What really is a problem is, that though the installed PHP is 5.2.6 it has no zip library and no json functions. The first is probably because PHP was not compiled with --enable-zip. The second is a big mystery though. As I understand it, it always comes with PHP unless its compiled with the --disable-json configure option. This is however not the case. And the json extension module is just not there. I even tried to enable it with extension=json.so with no luck either. the configure options of my PHP are (as shipped with Plesk 8.4) '../configure' '--prefix=/usr' '--datadir=/usr/share/php5' '--mandir=/usr/share/man' '--bindir=/usr/bin' '--with-libdir=lib' '--includedir=/usr/include' '--sysconfdir=/etc/php5/apache2' '--with-config-file-path=/etc/php5/apache2' '--with-config-file-scan-dir=/etc/php5/conf.d' '--enable-libxml' '--enable-session' '--with-mm' '--with-pcre-regex=/usr' '--enable-xml' '--enable-simplexml' '--enable-spl' '--enable-filter' '--disable-debug' '--enable-inline-optimization' '--disable-rpath' '--disable-static' '--enable-shared' '--program-suffix=5' '--with-pic' '--with-gnu-ld' '--with-system-tzdata=/usr/share/zoneinfo' '--with-apxs2=/usr/sbin/apxs2' '--disable-all' '--disable-cli' As I understand it, PECL is not an option with 5.2.6. Or am I mistaken? Even if I was not, the openSuse repository only goes as far as PHP 5.2.4. The openSuse install even came without zypper, which I had to manually install. So is there a way to get ziplib and json running in PHP 5.2.6 without having to recompile the binary?
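
    One detail worth noting in the configure line above: it contains '--disable-all', which switches off every bundled extension that is not explicitly re-enabled, and json is not re-enabled anywhere in that list - that alone would explain the missing json functions even though json has shipped with PHP since 5.2.0. A hedged workaround that avoids rebuilding the whole binary is to compile just the bundled extension against the installed PHP (this assumes phpize from a matching php development package is available; paths are examples):

        # build the json extension from the matching PHP source tree
        cd php-5.2.6/ext/json
        phpize && ./configure && make
        sudo make install                                   # installs json.so into the extension dir
        echo "extension=json.so" | sudo tee /etc/php5/conf.d/json.ini
        sudo /etc/init.d/apache2 restart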

    Read the article
