Search Results


  • Cancel the calculation if an input mismatch is found

    - by Lert Pianapitham
Hello everybody, I have written a sub procedure that is called from the main procedure (triggered by an event on mainForm) to validate the inputs before the main calculation runs. Now I'm looking for a way to cancel the calculation and refocus mainForm if some inputs don't match. I think it is unnecessary to use a Try-Catch statement to trap the error from the calculation, because I know its source and it should be prevented up front for performance reasons. Does anyone have an idea how to deal with this? Best regards, Lert Pianapitham
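A minimal VB.NET sketch of the early-return pattern the question is circling (control and method names such as txtAmount, btnCalculate and RunCalculation are invented for illustration):

```vb
' Hypothetical validation routine: returns False on the first mismatch.
Private Function ValidateInputs() As Boolean
    If Not IsNumeric(txtAmount.Text) Then
        MessageBox.Show("Please enter a number.")
        txtAmount.Focus()                 ' refocus the offending control
        Return False
    End If
    Return True
End Function

Private Sub btnCalculate_Click(sender As Object, e As EventArgs) Handles btnCalculate.Click
    If Not ValidateInputs() Then Return   ' cancel before the calculation starts; no Try-Catch needed
    RunCalculation()                      ' hypothetical main calculation
End Sub
```

The early Return avoids exceptions entirely, which matches the performance concern in the question.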

    Read the article

  • Folder copy VC++

    - by sijith
I want to copy a directory from one drive to another. The selected directory contains many sub-directories and files. How can I implement this in VC++?
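A sketch using std::filesystem from C++17; on the older VC++ toolchains this question likely targets, SHFileOperation with FO_COPY or a hand-rolled recursive copy would be the equivalent. The paths below are placeholders:

```cpp
#include <filesystem>
#include <iostream>

int main() {
    namespace fs = std::filesystem;
    try {
        // Recursively copies every sub-directory and file in the tree.
        fs::copy(R"(C:\source\data)", R"(D:\target\data)",
                 fs::copy_options::recursive | fs::copy_options::overwrite_existing);
    } catch (const fs::filesystem_error& e) {
        std::cerr << "Copy failed: " << e.what() << '\n';
        return 1;
    }
    return 0;
}
```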

    Read the article

  • What is the need for JavaScript when developing a page?

    - by nepc
I have been developing websites for some time now and I hardly use any JavaScript in my pages. Whatever I want to do with JavaScript seems possible through PHP, even Ajax itself: we can send a regular request instead of an Ajax request, can't we? We can use "include" to pull in sub-parts of pages. So am I missing something about JavaScript that I don't know of?
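The distinction the question glosses over is that PHP runs on the server, so every change costs a full request/response cycle, while JavaScript runs in the browser and can react instantly. A minimal illustration (the element IDs are made up):

```html
<input id="price" type="text">
<span id="price-warning"></span>
<script>
// Runs in the browser on every keystroke - no request to the server,
// no page reload. Server-side PHP alone cannot react this way.
document.getElementById('price').addEventListener('input', function () {
  document.getElementById('price-warning').textContent =
      isNaN(Number(this.value)) ? 'not a number' : '';
});
</script>
```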

    Read the article

  • Should I use AJAX or fetch all the data beforehand?

    - by rix501
I have a web app where I need to change one drop-down list dynamically depending on another drop-down list. I have two options: fetch all the data beforehand with PHP and "manage" it later with JavaScript, or fetch the data the user wants through AJAX. The catch is that the page loads with all the data by default, and the user can later select a sub-category to narrow the drop-downs. Which of the two options is better (faster, less resource-intensive)?
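A sketch of the preloaded approach, assuming the PHP side emits the category map as JSON (the variable and element names are invented):

```html
<select id="category">
  <option value="fruit">Fruit</option>
  <option value="dairy">Dairy</option>
</select>
<select id="subcategory"></select>

<script>
// Assumed to be emitted by the PHP page, e.g. echo json_encode($map):
var subcategories = {
  fruit: ['apple', 'banana'],
  dairy: ['milk', 'cheese']
};

document.getElementById('category').addEventListener('change', function () {
  var sub = document.getElementById('subcategory');
  sub.innerHTML = '';                          // clear the dependent list
  (subcategories[this.value] || []).forEach(function (name) {
    sub.add(new Option(name, name));           // repopulate from the preloaded map
  });
});
</script>
```

If the full data set is small, the preloaded map avoids a network round trip on every selection; AJAX starts to win once the data set is too large to ship on every page load.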

    Read the article

  • How to get the URL of the parent from an iframe in ASP.NET

    - by user163457
Hi, I have a page on the example.com domain which contains an iframe; this iframe loads an ASP.NET (C#) page from the example2.com domain. From the code-behind on the example2.com domain, how can I get the URL of the parent (containing) page? Would it help if the two pages were on the same domain, so that example.com contains an iframe pointing at sub.example.com? Thanks
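One best-effort approach, sketched below, assumes the parent page's URL arrives in the HTTP Referer header on the iframe's first load (browsers may omit it, and it is lost on postbacks). Cross-domain, JavaScript's window.parent.location is blocked by the same-origin policy, which is why putting both pages under one registrable domain (example.com and sub.example.com) makes things easier:

```csharp
// Code-behind on the framed page (example2.com). Best effort only:
// the Referer header is optional and disappears on postbacks.
protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack && Request.UrlReferrer != null)
    {
        // On the first load this is usually the containing page's URL;
        // stash it so later postbacks can still see it.
        Session["ParentUrl"] = Request.UrlReferrer.AbsoluteUri;
    }
}
```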

    Read the article

  • Margin totals in xtabs

    - by James
If you have two cross-classifying variables you can use rowSums and colSums to produce margin totals on an xtabs output. But how can it be done if you have three classifying variables (i.e. margin totals in each sub-table)?
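One likely candidate is stats::addmargins, which works on arrays of any dimension; a sketch with invented variable names:

```r
# Hypothetical three-way table; a, b and c are factors in df.
tab <- xtabs(~ a + b + c, data = df)

# Totals over every dimension:
addmargins(tab)

# Totals within each sub-table only (row and column sums computed
# separately for each level of c), flattened for printing:
ftable(addmargins(tab, margin = c(1, 2)))
```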

    Read the article

  • Style a navigation link when a particular div is shown

    - by Matt Meadows
I have jQuery working to show a particular div when a certain link is clicked. I managed to apply the effect I'm after to the main navigation bar by putting an id on the body tag and styling the link when that id is present. However, I'd like to apply the same effect to the sub-navigation when a certain div is present.

How the main navigation is styled. HTML:

```html
<nav>
  <ul>
    <li id="nav-home"><a href="index.html">Home</a></li>
    <li id="nav-showreel"><a href="showreel.html">Showreel</a></li>
    <li id="nav-portfolio"><a href="portfolio.html">Portfolio</a></li>
    <li>Contact</li>
  </ul>
</nav>
```

CSS:

```css
body#home li#nav-home,
body#portfolio li#nav-portfolio {
  background: url("Images/Nav_Underline.png") no-repeat;
  background-position: center bottom;
  color: white;
}
```

(Other links haven't been added to the styling as those pages are still in development.)

How the sub-navigation is structured:

```html
<nav id="portfolioNav">
  <ul>
    <li id="portfolio-compositing"><a id="compositingWork" href="#">Compositing</a></li>
    <li id="portfolio-animation"><a id="animationWork" href="#">Animation</a></li>
    <li id="portfolio-motionGfx"><a id="GFXWork" href="#">Motion Graphics</a></li>
    <li id="portfolio-3D"><a id="3DWork" href="#">3D</a></li>
  </ul>
</nav>
```

As you can see, it has a similar format to the main navigation; however, I've tried the same approach and it doesn't work. The JavaScript that switches the divs on the navigation click:

```html
<script type="text/javascript">
$(document).ready(function() {
  $('#3DWork').click(function() {
    $('#portfolioWork').load('portfolioContent.html #Portfolio3D');
  });
  $('#GFXWork').click(function() {
    $('#portfolioWork').load('portfolioContent.html #motionGraphics');
  });
  $('#compositingWork').click(function() {
    $('#portfolioWork').load('portfolioContent.html #PortfolioCompositing');
  });
  $('#animationWork').click(function() {
    $('#portfolioWork').load('portfolioContent.html #PortfolioAnimation');
  });
});
</script>
```

JSFiddle for the full HTML & CSS: JSFiddle File. The effect I'm after:
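Since the body-id trick can't change when content is swapped in without a page load, one common pattern is to toggle a class on the sub-nav from the same click handlers (a sketch; the "active" class name is invented):

```html
<script type="text/javascript">
$(document).ready(function() {
  // Mark the clicked sub-nav item; CSS can then style li.active
  // the same way body#home li#nav-home styles the main navigation.
  $('#portfolioNav a').click(function() {
    $('#portfolioNav li').removeClass('active');
    $(this).closest('li').addClass('active');
  });
});
</script>
```

A matching rule such as `#portfolioNav li.active { background: url("Images/Nav_Underline.png") no-repeat center bottom; }` would then mirror the main-nav styling.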

    Read the article

  • Run PHP code on a specific domain only

    - by curtismchale
I need to echo out some specific PHP code only on the sub-domain of a site. This is where I am so far:

```php
<?php
if ($_SERVER['SERVER_NAME'] != "http://support.demo.com")
    echo "<?php bb_head(); ?>";
?>
```

Of course, if this worked I wouldn't be asking a question. Help is appreciated.
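Two things in that snippet are probably not what was intended: $_SERVER['SERVER_NAME'] holds a bare host name (no scheme), so a comparison against "http://support.demo.com" never matches, and echoing the string "&lt;?php bb_head(); ?&gt;" prints literal text instead of calling the function. A sketch of the presumed intent, running bb_head() only on the support sub-domain:

```php
<?php
// SERVER_NAME is just the host, e.g. "support.demo.com" - never "http://...".
if ($_SERVER['SERVER_NAME'] === 'support.demo.com') {
    bb_head(); // call the function directly rather than echoing its source text
}
?>
```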

    Read the article

  • Using .htaccess to change my website URLs

    - by James P
I have some pages organised like this:

```
http://localhost/index.html
http://localhost/download.html
http://localhost/contact.html
```

And I need them changed to suit the following URL structure:

```
http://localhost/
http://localhost/download
http://localhost/contact
```

without making sub-directories and putting the pages in as index.html. As far as I know .htaccess can be used for this, but I have no idea what I need to add to my .htaccess file to make it work. Can anyone provide some help? Thanks.
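A minimal mod_rewrite sketch, assuming Apache with mod_rewrite enabled and AllowOverride permitting .htaccess rules:

```apache
RewriteEngine On

# Map /download to download.html, /contact to contact.html, etc.,
# while leaving requests for real files and directories untouched.
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^([A-Za-z0-9-]+)/?$ $1.html [L]
```

The root URL (http://localhost/) already serves index.html via Apache's DirectoryIndex, so no rule is needed for it.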

    Read the article

  • Varnish + Nginx + multiple IP addresses

    - by adnan
This is my first shot at making Varnish work on my dedicated server, which hosts 2 domains with 2 separate IP addresses. My simplified setup is as follows.

Nginx conf:

```nginx
server {
    listen ip-address-1:8080;
}
server {
    listen ip-address-2:8080;
}
```

Varnish VCL:

```vcl
backend default {
    .host = "127.0.0.1";
    .port = "80";
}
```

And in the Varnish conf I have defined VARNISH_LISTEN_PORT=80. Varnish and Nginx (and php-fpm) are running properly, but when I try to go to my website it shows the "Welcome to nginx" page, and the headers don't have X-Varnish in them. It seems that for some reason Varnish is not listening on port 80. I suspect this has to do with the VCL file, where the backend points at the 127.0.0.1 host. I'm running two WordPress sites. Where should I look to get Varnish working properly? Cheers, Adnan

EDIT: Nginx seems to be on 8080 correctly, but Varnish is not listening on the right IP address. After following Jens' suggestion of multiple Varnish IP addresses, netstat -lnp yields:

```
Proto Recv-Q Send-Q Local Address        Foreign Address  State   PID/Program name
tcp   0      0      46.105.40.241:8080   0.0.0.0:*        LISTEN  21610/nginx
tcp   0      0      5.135.166.39:8080    0.0.0.0:*        LISTEN  21610/nginx
tcp   0      0      0.0.0.0:80           0.0.0.0:*        LISTEN  21610/nginx
tcp   0      0      127.0.0.1:53         0.0.0.0:*        LISTEN  2544/named
tcp   0      0      0.0.0.0:21           0.0.0.0:*        LISTEN  1195/vsftpd
tcp   0      0      0.0.0.0:22           0.0.0.0:*        LISTEN  1184/sshd
tcp   0      0      127.0.0.1:953        0.0.0.0:*        LISTEN  2544/named
tcp   0      0      46.105.40.241:443    0.0.0.0:*        LISTEN  21610/nginx
tcp   0      0      5.135.166.39:443     0.0.0.0:*        LISTEN  21610/nginx
tcp   0      0      127.0.0.1:6082       0.0.0.0:*        LISTEN  21350/varnishd
tcp   0      0      :::80                :::*             LISTEN  21351/varnishd
tcp   0      0      ::1:53               :::*             LISTEN  2544/named
tcp   0      0      :::22                :::*             LISTEN  1184/sshd
tcp   0      0      ::1:953              :::*             LISTEN  2544/named
udp   0      0      127.0.0.1:53         0.0.0.0:*                2544/named
udp   0      0      ::1:53               :::*                     2544/named
```

default.vcl:

```vcl
backend ikhebeenbril {
    .host = "5.135.166.39";
    .port = "8080";
}
backend sunculture {
    .host = "46.105.40.241";
    .port = "8080";
}

sub vcl_recv {
    if (server.ip == "5.135.166.39") {
        set req.backend = ikhebeenbril;
    } else {
        set req.backend = sunculture;
    }
    ...
}

sub vcl_hash {
    hash_data(server.ip);
    if (req.http.host) {
        hash_data(req.http.host);
    }
    hash_data(req.url);
    if (req.http.Accept-Encoding) {
        hash_data(req.http.Accept-Encoding);
    }
    return (hash);
}
```

nginx server blocks:

```nginx
server {
    listen 5.135.166.39:80;
    listen 5.135.166.39:443 default ssl spdy;
    server_name www.ikhebeenbril.nl;
}
server {
    listen 46.105.40.241:80;
    listen 46.105.40.241:443 default ssl spdy;
    server_name www.thesunculture.com;
}
```

    Read the article

  • GNU/Linux: swapping blocks the system

    - by Ole Tange
I have used GNU/Linux on systems from 4 MB RAM to 512 GB RAM. When they start swapping, most of the time you can still log in and kill off the offending process - you just have to be 100-1000 times more patient.

On my new 32 GB system that has changed: it blocks when it starts swapping, sometimes with full disk activity but other times with no disk activity. To examine what might be the issue I have written this program. The idea is:

1. grab 3% of the memory free right now
2. if that caused swap to increase: stop
3. keep the chunk used for 30 seconds by forking off
4. goto 1

```perl
#!/usr/bin/perl

sub freekb {
    my $free = `free|grep buffers/cache`;
    my @a = split / +/, $free;
    return $a[3];
}

sub swapkb {
    my $swap = `free|grep Swap:`;
    my @a = split / +/, $swap;
    return $a[2];
}

my $swap = swapkb();
my $lastswap = $swap;
my $free;
while ($lastswap >= $swap) {
    print "$swap $free";
    $lastswap = $swap;
    $swap = swapkb();
    $free = freekb();
    my $used_mem = "x" x (1024 * $free * 0.03);
    if (not fork()) {
        sleep 30;
        exit();
    }
}
print "Swap increased $swap $lastswap\n";
```

Running the program forever ought to keep the system at the limit of swapping, while only grabbing a minimal amount of swap, and doing so very slowly (i.e. a few MB at a time at most). If I run:

```
forever free | stdbuf -o0 timestamp > freelog
```

I ought to see swap rising slowly every second (forever and timestamp are from https://github.com/ole-tange/tangetools). But that is not the behaviour I see: I see swap increasing in jumps, and the system is completely blocked during these jumps. Here the system is blocked for 30 seconds while swap usage increases by 1 GB:

```
secs
169.527 Swap: 18440184  154184 18286000
170.531 Swap: 18440184  154184 18286000
200.630 Swap: 18440184 1134240 17305944
210.259 Swap: 18440184 1076228 17363956
```

Blocked: 21 secs. Swap increase 2400 MB:

```
307.773 Swap: 18440184  581324 17858860
308.799 Swap: 18440184  597676 17842508
330.103 Swap: 18440184 2503020 15937164
331.106 Swap: 18440184 2502936 15937248
```

Blocked: 20 secs. Swap increase 2200 MB:

```
751.283 Swap: 18440184  885288 17554896
752.286 Swap: 18440184  911676 17528508
772.331 Swap: 18440184 3193532 15246652
773.333 Swap: 18440184 1404540 17035644
```

Blocked: 37 secs. Swap increase 2400 MB:

```
904.068 Swap: 18440184  613108 17827076
905.072 Swap: 18440184  610368 17829816
942.424 Swap: 18440184 3014668 15425516
942.610 Swap: 18440184 2073580 16366604
```

This is bad enough, but what is even worse is that the system sometimes stops responding at all - even if I wait for hours. I have the feeling it is related to the swapping issue, but I cannot tell for sure.

My first idea was to tweak /proc/sys/vm/swappiness from 60 to 0 or 100, just to see if that had any effect at all. 0 did not have an effect, but 100 did cause the problem to arise less often.

How can I prevent the system from blocking for such a long time? Why does it decide to swap out 1-3 GB when less than 10 MB would suffice?

    Read the article

  • Install Intel USB 3.0 eXtensible Host Controller Driver for Windows Server 2008 R2 x64

    - by ffrugone
According to Intel and Dell, my board is technically a 'desktop' board and they therefore do not support the Intel USB 3.0 eXtensible Host Controller drivers for Windows Server 2008 (R2 x64). I'm trying to find a workaround. I found an entry by someone who tried to tackle this, but I can't make his fix work for me. Below I have copied both his entry and my reply. I'm a loyal Stack Overflow user, and hopefully the people here at Server Fault can help me.

anyforumuser, Re: GA-Z77X-UD5H USB3 Drivers not installing? (Reply #6, July 05, 2012):

Thanks to JoeMiner; his process for the network drivers gave me the clues to figure out how to get the USB3 drivers working. I have got the Intel USB3 drivers working at full speed in Win Server 2008 R2. You have to edit the following files:

1. In mup.xml, change the "Windows7" entries to "W2K8".
2. In setup.if2, under [groups], on the line starting with "HSCSDRIVER", change the "IsOS( ... )" entry to "IsOS(WIN2008_R2,WIN2008_R2_MAXSP)".
3. In all the .inf files, copy the content of the [Intel.NTAMD64.6.1] group to the [Intel.NTAMD64.6.2] group. (Here I am not entirely sure which is correct, so there are some double-ups.)
4. In the drivers folder, copy the "Win7" folder to "win2008", "win2008_r2" and "x64" - i.e. your drivers folder should now contain "win2008", "win2008_r2" and "x64" folders, each containing the contents of the Win7 folder (the .inf files should already have been fixed).

Run the installer; it should install properly and work now. You will have to reboot. If it doesn't work, remove the Intel USB3 controllers from Device Manager and have it "scan for hardware changes". Good luck!!!

benevida, Re: GA-Z77X-UD5H Intel Network Drivers not installing? (Reply #7, August 13, 2012):

Thank you anyforumuser! A process for getting this driver installed was exactly what I needed. However, I've hit a snag. I believe I've followed every step exactly as written, but I'm getting an error during installation: "One or more files that are required for installation are either missing or corrupted. Setup will exit." Behind the error, the 'Setup Progress' shows the current step as "Copying File: C:\Program Files (x86)\Intel\Intel(R) USB 3.0 eXtensible Host Controller Driver\Drivers\iusb3xhc.man". I've checked the installation files, and iusb3xhc.man seems to be a viable file in all of the Windows 2008 sub-directories of the Drivers folder, so I don't see how the file could be missing, and I doubt that it is corrupted (although it does NOT exist in the \Drivers\HCSwitch folder).

I opened Setup.if2, and two aspects of the step that copies iusb3xhc.man caught my eye. First, the steps immediately preceding it are set to 'error=ignore'; if they hadn't completed successfully, this is the first step where we'd hear about it. Second, this is the first step where the relative path '%source%\drivers\%_os%\%_ia%\' is used; if I haven't named the Windows 2008 sub-directories correctly, I could see where things are fouling up. In any event, if someone could take a look and make suggestions I'd appreciate it. Thank you.

    Read the article

  • Call to daemon in a /etc/init.d script is blocking, not running in background

    - by tony
I have a Perl script that I want to daemonize. Basically, this Perl script reads a directory every 30 seconds, reads the files that it finds, and then processes the data. To keep it simple, consider the following Perl script (called synpipe_server; there is a symbolic link to this script in /usr/sbin/):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $continue = 1;
$SIG{'TERM'} = sub { $continue = 0; print "Caught TERM signal\n"; };
$SIG{'INT'}  = sub { $continue = 0; print "Caught INT signal\n"; };

my $i = 0;
while ($continue) {
    # do stuff
    print "Hello, I am running " . ++$i . "\n";
    sleep 3;
}
```

So this script basically prints something every 3 seconds. Then, as I want to daemonize this script, I've also put this bash script (also called synpipe_server) in /etc/init.d/:

```bash
#!/bin/bash
# synpipe_server : This starts and stops synpipe_server
#
# chkconfig: 12345 12 88
# description: Monitors all production pipelines
# processname: synpipe_server
# pidfile: /var/run/synpipe_server.pid

# Source function library.
. /etc/rc.d/init.d/functions

pname="synpipe_server"
exe="/usr/sbin/synpipe_server"
pidfile="/var/run/${pname}.pid"
lockfile="/var/lock/subsys/${pname}"

[ -x $exe ] || exit 0

RETVAL=0

start() {
    echo -n "Starting $pname : "
    daemon ${exe}
    RETVAL=$?
    PID=$!
    echo
    [ $RETVAL -eq 0 ] && touch ${lockfile}
    echo $PID > ${pidfile}
}

stop() {
    echo -n "Shutting down $pname : "
    killproc ${exe}
    RETVAL=$?
    echo
    if [ $RETVAL -eq 0 ]; then
        rm -f ${lockfile}
        rm -f ${pidfile}
    fi
}

restart() {
    echo -n "Restarting $pname : "
    stop
    sleep 2
    start
}

case "$1" in
    start)   start ;;
    stop)    stop ;;
    status)  status ${pname} ;;
    restart) restart ;;
    *)       echo "Usage: $0 {start|stop|status|restart}" ;;
esac

exit 0
```

So (if I have understood the documentation for daemon correctly) the Perl script should run in the background, with its output redirected to /dev/null, if I execute:

```
service synpipe_server start
```

But here is what I get instead:

```
[root@master init.d]# service synpipe_server start
Starting synpipe_server : Hello, I am running 1
Hello, I am running 2
Hello, I am running 3
Hello, I am running 4
Caught INT signal
                                                           [  OK  ]
[root@master init.d]#
```

So it starts the Perl script, but runs it without detaching it from the current terminal session, and I can see the output printed in my console ... which is not really what I was expecting. Moreover, the PID file is empty (or contains a line feed only; no pid returned by daemon). Does anyone have any idea of what I am doing wrong?

EDIT: maybe I should say that I am on a Red Hat machine (Scientific Linux SL release 5.4 (Boron)). Would it do the job if, instead of using the daemon function, I used something like:

```bash
nohup ${exe} >/dev/null 2>&1 &
```

in the init script?

    Read the article

  • ASP.NET MVC 3: Implicit and Explicit code nuggets with Razor

    - by ScottGu
This is another in a series of posts I'm doing that cover some of the new ASP.NET MVC 3 features:

- New @model keyword in Razor (Oct 19th)
- Layouts with Razor (Oct 22nd)
- Server-Side Comments with Razor (Nov 12th)
- Razor's @: and <text> syntax (Dec 15th)
- Implicit and Explicit code nuggets with Razor (today)

In today's post I'm going to discuss how Razor enables you to both implicitly and explicitly define code nuggets within your view templates, and walk through some code examples of each.

Fluid Coding with Razor

ASP.NET MVC 3 ships with a new view-engine option called "Razor" (in addition to the existing .aspx view engine). You can learn more about Razor, why we are introducing it, and the syntax it supports from my Introducing Razor blog post.

Razor minimizes the number of characters and keystrokes required when writing a view template, and enables a fast, fluid coding workflow. Unlike most template syntaxes, you do not need to interrupt your coding to explicitly denote the start and end of server blocks within your HTML. The Razor parser is smart enough to infer this from your code. This enables a compact and expressive syntax which is clean, fast and fun to type.

For example, a Razor snippet can iterate a collection of products and output a <ul> list of product names that link to their corresponding product pages, embedding two code nuggets within the content of the foreach loop: one outputs the name of the product, and the other embeds the ProductID within a hyperlink. Notice that we don't have to explicitly wrap these code nuggets - Razor is instead smart enough to implicitly identify where the code begins and ends in both of these situations.

How Razor Enables Implicit Code Nuggets

Razor does not define its own language. Instead, the code you write within Razor code nuggets is standard C# or VB. This allows you to re-use your existing language skills, and avoid having to learn a customized language grammar.

The Razor parser has smarts built into it so that, whenever possible, you do not need to explicitly mark the end of the C#/VB code nuggets you write. This makes coding more fluid and productive, and enables a nice, clean, concise template syntax. Below are a few scenarios where Razor implicitly identifies the code nugget scope for you, so you can avoid having to explicitly mark its beginning/end:

- Property access: output a variable value, or a sub-property on a variable referenced via "dot" notation - including sub-properties multiple levels deep.
- Array/collection indexing: index into collections or arrays.
- Calling methods: invoke methods.

Notice how in all of the scenarios above we do not have to explicitly end the code nugget - Razor implicitly identifies the end of the code block for us.
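Sketches of those scenarios in Razor syntax (the original post showed them as screenshots; Model.Products and the product/products variables are assumed names):

```cshtml
@* Iterating products with two implicit code nuggets inside the loop: *@
<ul>
    @foreach (var product in Model.Products) {
        <li><a href="/Products/@product.ProductID">@product.Name</a></li>
    }
</ul>

@* Property access, including sub-properties several levels deep: *@
<p>@product.Category.Name</p>

@* Array/collection indexing: *@
<p>@products[0].Name</p>

@* Calling methods: *@
<p>@product.Name.ToUpper()</p>
```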
Razor's Parsing Algorithm for Code Nuggets

The algorithm below captures the core parsing logic we use to support "@" expressions within Razor, and to enable the implicit code nugget scenarios above:

1. Parse an identifier: as soon as we see a character that isn't valid in a C# or VB identifier, stop and move to step 2.
2. Check for brackets: if we see "(" or "[", go to step 2.1; otherwise, go to step 3.
   - 2.1. Parse until the matching ")" or "]" (we track nested "()" and "[]" pairs and ignore "()[]" seen inside strings or comments), then go back to step 2.
3. Check for a ".": if we see one, go to step 3.1; otherwise, do NOT accept the "." as code, and go to step 4.
   - 3.1. If the character AFTER the "." is a valid identifier character, accept the "." and go back to step 1; otherwise, go to step 4.
4. Done!

Differentiating between code and content

Step 3.1 is a particularly interesting part of the above algorithm, and enables Razor to differentiate between scenarios where an identifier is being used as part of the code statement and scenarios where it should instead be treated as static content. For example, when a code nugget is followed by ? or ! characters, Razor is able to implicitly identify that they should be treated as static string content, as opposed to part of the code expression, because there is whitespace after them. This is pretty cool and saves us keystrokes.

Explicit Code Nuggets in Razor

Razor is smart enough to implicitly identify a lot of code nugget scenarios. But there are still times when you want or need to be more explicit about how you scope the code nugget expression. The @(expression) syntax allows you to do this: you can write any C#/VB code statement you want within the @() syntax, and Razor will treat the wrapping () characters as the explicit scope of the code nugget. A few scenarios where the explicit code nugget feature is useful:

- Performing arithmetic calculations/modifications within a code nugget.
- Appending text to a code expression result: you can append static text at the end of a code nugget without worrying about it being incorrectly parsed as code. For example, a code nugget can be embedded within an <img> element's src attribute to link to images with URLs like "/Images/Beverages.jpg". Without the explicit parentheses, Razor would have looked for a ".jpg" property on the CategoryName (and raised an error). By being explicit we can clearly denote where the code ends and the text begins.
- Using generics and lambdas: explicit expressions also allow us to use generic types and generic methods within code expressions - and keep the <> characters in generics from being ambiguous with tag elements.

One More Thing... Intellisense within Attributes

We have used code nuggets within HTML attributes in several of the examples above. One nice feature supported by the Razor code editor within Visual Studio is the ability to still get VB/C# intellisense when doing this - both for an implicit code nugget within an <a> href="" attribute and for an explicit code nugget embedded in the middle of an <img> src="" attribute. We get full code intellisense in both scenarios, despite the fact that the code expression is embedded within an HTML attribute (something the existing .aspx code editor doesn't support).
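Hedged sketches of the explicit @() scenarios (again with invented names; the originals were screenshots):

```cshtml
@* Arithmetic inside an explicit code nugget: *@
<p>Price with tax: @(product.UnitPrice * 1.08)</p>

@* Appending text to a code expression result - without the parentheses
   Razor would look for a ".jpg" property on CategoryName: *@
<img src="/Images/@(category.CategoryName).jpg" />

@* Generics - the explicit scope keeps <Product> from being read as a tag: *@
<p>@(items.OfType<Product>().Count())</p>
```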
This makes writing code even easier, and ensures that you can take advantage of intellisense everywhere.

Summary

Razor enables a clean and concise templating syntax that supports a very fluid coding workflow. Razor's ability to implicitly scope code nuggets reduces the amount of typing you need to perform, and leaves you with really clean code. When necessary, you can also explicitly scope code expressions using the @(expression) syntax to provide greater clarity around your intent, as well as to disambiguate code statements from static markup.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • LLBLGen Pro v3.1 released!

    - by FransBouma
Yesterday we released LLBLGen Pro v3.1! Version 3.1 comes with new features and enhancements, which I'll describe briefly below. v3.1 is a free upgrade for v3.x licensees.

What's new / changed?

Designer:

- Extensible import system. An extensible import system has been added to the designer to import project data from external sources. Importers are plug-ins which import project meta-data (like entity definitions, mappings and relational model data) from an external source into the loaded project. v3.1 includes an importer plug-in for importing project elements from existing LLBLGen Pro v3.x project files. You can use this importer to create source projects from which you import parts of models to build your actual project.
- Model-only relationships. In v3.1, relationships of the type 1:1, m:1 and 1:n can be marked as model-only. A model-only relationship isn't required to have a backing foreign key constraint in the relational model data. They're ideal for projects which have to work with relational databases where changes can't always be made, or where some relationships can't be added to the relational model (e.g. relationships which are important for the entity model but are not allowed in the relational model for some reason).
- Custom field ordering. Although fields in an entity definition don't really have an ordering, in some situations it can be important to have the entity fields in a given order, e.g. when you use compound primary keys. Field ordering can be defined using a pop-up dialog which can be opened in various ways, e.g. from the project explorer, model view and entity editor. It can also be set automatically during refreshes, based on new settings.
- Command-line relational model data refresher tool, CliRefresher.exe. The command-line refresh tool shipped with v2.6 is now available for v3.1 as well.
- Navigation enhancements in various designer elements. It's now easier to find elements like entities and typed views in the project explorer from the editors: navigate to related entities in the project explorer by right-clicking a relationship, to the super-type when right-clicking an entity, and to the sub-type when right-clicking a sub-type node.
- Minor visual enhancements / tweaks.

LLBLGen Pro Runtime Framework:

- Entity creation is now up to 30% faster and takes 5% less memory. Instantiating an entity object has been optimized further by tweaks inside the framework, making it up to 30% faster; it now also takes up to 5% less memory than in v3.0.
- Prefetch Path node merging is now up to 20-25% faster. Setting entity references required the creation of a new relationship object. As this relationship object is only used internally (for syncing), it could be cached, which increases performance in the merging functionality by 20-25%.
- Entity fetches are now up to 20% faster. A large number of tweaks have been applied to make entity fetches up to 20% faster than in v3.0.
- Full WCF RIA support. It's now possible to use your LLBLGen Pro runtime framework powered domain layer in a WCF RIA application using the VS.NET tools for WCF RIA Services. WCF RIA Services is a Microsoft technology for .NET 4, typically used within Silverlight applications.
- SQL Server DQE compatibility level is now per instance (usable in Adapter). It's now possible to set the compatibility level of the SQL Server Dynamic Query Engine (DQE) per DQE instance, instead of only through the global setting as before. The global setting is still available and is used as the default value for the per-instance compatibility level. You can use this to switch between CE Desktop and normal SQL Server compatibility per DataAccessAdapter instance.
- Support for the COUNT_BIG aggregate function (SQL Server specific). COUNT_BIG has been added to the list of aggregate functions available in the framework.
- Minor changes / tweaks.

I'm especially pleased with the import system, as it makes working with entity models a lot easier. The import system lets you import, from another LLBLGen Pro v3 project, any entity definition, mapping and/or meta-data like table definitions. This way you can build repository projects where you store model fragments, e.g. the building blocks for a customer-order system, a user-credential model - any model you can think of. In most projects you'll recognize that some parts of your new model look familiar, and in those cases it is easier to import these parts from projects you created earlier. With LLBLGen Pro v3.1 you can.

For example, say you have an Oracle schema called CRM which contains the bread 'n' butter customer-order-product kind of model. You create an entity model from that schema and save it in a project file. Now you start working on another project for another customer, and you have to use SQL Server. You also start using model-first development, so you develop the entity model from scratch as there's no existing database. As this customer also requires a CRM-like entity model, you import the entities from your saved Oracle project into the new SQL Server targeting project. Because you don't work with Oracle this time, you don't import the relational meta-data - just the entities, their relationships and possibly their inheritance hierarchies, if any. As they're now entities in your project, you can change them a bit to match the new customer's requirements. This can save you a lot of time, because you can re-use pre-fab model fragments for new projects.

In the example above there are no tables yet (as you work model-first), so using the forward-mapping capabilities of LLBLGen Pro v3 creates the tables, PK constraints, unique constraints and FK constraints for you. This way you can build a nice repository of model fragments which you can re-use in new projects.

    Read the article

  • Wine PPA dependency problems after upgrading to 12.04

    - by Pablo
I recently upgraded my Ubuntu to 12.04. Since then, I haven't been able to use Synaptic at all. After trying to repair the issue through Synaptic, it returns:

```
installArchives() failed: dpkg: dependency problems prevent configuration of wine1.4-i386:i386:
 wine1.4-i386:i386 depends on wine1.4-common (= 1.4.1-0ubuntu1~precise1~ppa3); however:
  Version of wine1.4-common on system is 1.4-0ubuntu4.
dpkg: error processing wine1.4-i386:i386 (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of wine1.4:
 wine1.4 depends on wine1.4-amd64 (= 1.4-0ubuntu4); however:
  Version of wine1.4-amd64 on system is 1.4.1-0ubuntu1~precise1~ppa3.
 wine1.4 depends on wine1.4-i386 (= 1.4-0ubuntu4); however:
No apport report written because MaxReports is reached already
No apport report written because MaxReports is reached already
  Package wine1.4-i386 is not installed.
  Version of wine1.4-i386:i386 on system is 1.4.1-0ubuntu1~precise1~ppa3.
dpkg: error processing wine1.4 (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of wine1.4-common:
 wine1.4-common depends on wine1.4 (= 1.4-0ubuntu4); however:
  Package wine1.4 is not configured yet.
dpkg: error processing wine1.4-common (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of wine1.4-amd64:
 wine1.4-amd64 depends on wine1.4-common (= 1.4.1-0ubuntu1~precise1~ppa3); however:
  Version of wine1.4-common on system is 1.4-0ubuntu4.
dpkg: error processing wine1.4-amd64 (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of wine:
 wine depends on wine1.4; however:
  Package wine1.4 is not configured yet.
dpkg: error processing wine (--configure):
 dependency problems - leaving unconfigured
dpkg: depeNo apport report written because MaxReports is reached already
No apport report written because MaxReports is reached already
No apport report written because MaxReports is reached already
ndency problems prevent configuration of playonlinux:
 playonlinux depends on wine | wine-unstable; however:
  Package wine is not configured yet.
  Package wine1.4 which provides wine is not configured yet.
  Package wine-unstable is not installed.
dpkg: error processing playonlinux (--configure):
 dependency problems - leaving unconfigured
Errors were encountered while processing:
 wine1.4-i386:i386 wine1.4 wine1.4-common wine1.4-amd64 wine playonlinux
Error in function: SystemError: E:Sub-process /usr/bin/dpkg returned an error code (1)
dpkg: dependency problems prevent configuration of wine1.4:
 wine1.4 depends on wine1.4-amd64 (= 1.4-0ubuntu4); however:
  Version of wine1.4-amd64 on system is 1.4.1-0ubuntu1~precise1~ppa3.
 wine1.4 depends on wine1.4-i386 (= 1.4-0ubuntu4); however:
  Package wine1.4-i386 is not installed.
  Version of wine1.4-i386:i386 on system is 1.4.1-0ubuntu1~precise1~ppa3.
dpkg: error processing wine1.4 (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of wine:
 wine depends on wine1.4; however:
  Package wine1.4 is not configured yet.
dpkg: error processing wine (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of wine1.4-common:
 wine1.4-common depends on wine1.4 (= 1.4-0ubuntu4); however:
  Package wine1.4 is not configured yet.
dpkg: error processing wine1.4-common (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of wine1.4-amd64:
 wine1.4-amd64 depends on wine1.4-common (= 1.4.1-0ubuntu1~precise1~ppa3); however:
  Version of wine1.4-common on system is 1.4-0ubuntu4.
dpkg: error processing wine1.4-amd64 (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of wine1.4-i386:i386:
 wine1.4-i386:i386 depends on wine1.4-common (= 1.4.1-0ubuntu1~precise1~ppa3); however:
  Version of wine1.4-common on system is 1.4-0ubuntu4.
dpkg: error processing wine1.4-i386:i386 (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of playonlinux:
 playonlinux depends on wine | wine-unstable; however:
  Package wine is not configured yet.
  Package wine1.4 which provides wine is not configured yet.
  Package wine-unstable is not installed.
dpkg: error processing playonlinux (--configure):
 dependency problems - leaving unconfigured
```

I tried the following:

```
sudo apt-get clean
sudo apt-get autoclean
sudo apt-get install -f
```

and it returns:

```
Reading package lists... Done
Building dependency tree
Reading state information... Done
Correcting dependencies... Done
The following extra packages will be installed:
  wine1.4 wine1.4-common
Suggested packages:
  dosbox
The following packages will be upgraded:
  wine1.4 wine1.4-common
2 upgraded, 0 newly installed, 0 to remove and 110 not upgraded.
6 not fully installed or removed.
Need to get 0 B/1,126 kB of archives.
After this operation, 9,216 B of additional disk space will be used.
Do you want to continue [Y/n]? y
dpkg: dependency problems prevent configuration of wine1.4-i386:i386:
 wine1.4-i386:i386 depends on wine1.4-common (= 1.4.1-0ubuntu1~precise1~ppa3); however:
  Version of wine1.4-common on system is 1.4-0ubuntu4.
dpkg: error processing wine1.4-i386:i386 (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of wine1.4:
 wine1.4 depends on wine1.4-amd64 (= 1.4-0ubuntu4); however:
  Version of wine1.4-amd64 on system is 1.4.1-0ubuntu1~precise1~ppa3.
 wine1.4 depends on wine1.4-i386 (= 1.4-0ubuntu4); however:
  Package wine1.4-i386 is not installed.
  Version of wine1.4-i386:i386 on system is 1.4.1-0ubuntu1~precise1~ppa3.
No apport report written because the error message indicates its a followup error from a previous failure.
dpkg: error processing wine1.4 (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of wine1.4-common:
 wine1.4-common depends on wine1.4 (= 1.4-0ubuntu4); however:
  Package wine1.4 is not configured yet.
dpkg: error processing wine1.4-common (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of wine1.4-amd64:
 wine1.4-amd64 depends on wine1.4-common (= 1.4.1-0ubuntu1~precise1~ppa3); however:
  Version of wine1.4-common on system is 1.4-0ubuntu4.
dpkg: error processing wine1.4-amd64 (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of wine:
 wine depends on wine1.4; however:
  Package wine1.4 is not configured yet.
dpkg: error processing wine (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of playonlinux:
 playonlinux depends on wine | wine-unstable; however:
  PackagNo apport report written because the error message indicates its a followup error from a previous failure.
No apport report written because the error message indicates its a followup error from a previous failure.
No apport report written because MaxReports is reached already
No apport report written because MaxReports is reached already
e wine is not configured yet.
  Package wine1.4 which provides wine is not configured yet.
  Package wine-unstable is not installed.
dpkg: error processing playonlinux (--configure):
 dependency problems - leaving unconfigured
Errors were encountered while processing:
 wine1.4-i386:i386 wine1.4 wine1.4-common wine1.4-amd64 wine playonlinux
E: Sub-process /usr/bin/dpkg returned an error code (1)
```

Can anyone help me? I've been searching for a solution for days and still can't solve it. Thanks! Pablo
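For what it's worth, a commonly suggested way out of mixed PPA/archive Wine versions like this is to downgrade everything from the PPA back to the archive versions with ppa-purge; a sketch, assuming the packages came from the ubuntu-wine PPA (verify the actual origin with apt-cache policy wine1.4 first):

```bash
sudo apt-get install ppa-purge
# Disables the PPA and downgrades/removes everything installed from it:
sudo ppa-purge ppa:ubuntu-wine/ppa
```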

    Read the article

  • VS 2010 Debugger Improvements (BreakPoints, DataTips, Import/Export)

    - by ScottGu
This is the twenty-first in a series of blog posts I'm doing on the VS 2010 and .NET 4 release. Today's post covers a few of the nice usability improvements coming with the VS 2010 debugger.

The VS 2010 debugger has a ton of great new capabilities. Features like IntelliTrace (aka historical debugging), the new parallel/multithreaded debugging capabilities, and dump debugging support typically get a ton of (well-deserved) buzz and attention when people talk about the debugging improvements in this release. I'll be doing blog posts in the future that demonstrate how to take advantage of them as well. With today's post, though, I thought I'd start off by covering a few small but nice debugger usability improvements that were also included with the VS 2010 release, and which I think you'll find useful.

Breakpoint Labels

VS 2010 includes new support for better managing debugger breakpoints. One particularly useful feature is called "breakpoint labels" - it enables much better grouping and filtering of breakpoints within a project or across a solution. With previous releases of Visual Studio you had to manage each debugger breakpoint as a separate item. Managing each breakpoint separately can be a pain with large projects, and for cases when you want to maintain "logical groups" of breakpoints that you turn on/off depending on what you are debugging. Using the new VS 2010 "breakpoint labeling" feature you can now name these "groups" of breakpoints and manage them as a unit.

Grouping Multiple Breakpoints Together using a Label

The Breakpoints window within Visual Studio 2010 lists all of the breakpoints defined within my solution (which in this case is the ASP.NET MVC 2 code base). The first and last breakpoints in the list break into the debugger when a Controller instance is created or released by the ASP.NET MVC framework. Using VS 2010, I can select these two breakpoints, right-click, and then select the new "Edit labels..." menu command to give them a common label/name, making them easier to find and manage. In the dialog that appears we can create a new string label for our breakpoints or select an existing one we have already defined - in this case, a new label called "Lifetime Management" that describes what these two breakpoints cover. When we press the OK button, our two selected breakpoints are grouped under the newly created "Lifetime Management" label.

Filtering/Sorting Breakpoints by Label

We can use the "Search" combobox to quickly filter/sort breakpoints by label - for example, showing only the breakpoints with the "Lifetime Management" label.

Toggling Breakpoints On/Off by Label

We can also toggle sets of breakpoints on/off by label group: simply filter by the label group, do a Ctrl-A to select all the breakpoints, and then enable/disable all of them with a single click.

Importing/Exporting Breakpoints

VS 2010 now supports importing/exporting breakpoints to XML files - which you can then pass off to another developer, attach to a bug report, or simply re-load later. To export only a subset of breakpoints, you can filter by a particular label and then click the "Export breakpoint" button in the Breakpoints window. For example, I can filter my breakpoint list to export only two particular breakpoints (specific to a bug that I'm chasing down) to an XML file and attach it to a bug report or email - which enables another developer to easily set up the debugger in the correct state to investigate the issue on a separate machine.

Pinned DataTips

Visual Studio 2010 also includes some nice new "DataTip pinning" features that enable you to better see and track variable and expression values while in the debugger. Simply hover over a variable or expression within the debugger to expose its DataTip (a tooltip that displays its value), and then click the new "pin" button on it to make the DataTip always visible.

You can "pin" any number of DataTips onto the screen. In addition to pinning top-level variables, you can also drill into the sub-properties of variables and pin them as well. For example, you might pin three variables - "category", "Request.RawUrl" and "Request.LogonUserIdentity.Name" - where the last two are sub-properties of the "Request" object.

Associating Comments with Pinned DataTips

Hovering over a pinned DataTip exposes some additional UI within the debugger. Clicking the comment button at the bottom of this UI expands the DataTip and allows you to optionally add a comment to it. This makes it really easy to attach and track debugging notes.

Pinned DataTips are usable across both Debug Sessions and Visual Studio Sessions

Pinned DataTips can be used across multiple debugger sessions. This means that if you stop the debugger, make a code change, and then recompile and start a new debug session, any pinned DataTips will still be there, along with any comments you associated with them. Pinned DataTips can also be used across multiple Visual Studio sessions: if you close your project, shut down Visual Studio, and then later open the project up again, any pinned DataTips will still be there as well.

See the Value from the Last Debug Session (Great Code Editor Feature)

How many times have you stopped the debugger only to go back to your code and say: $#@! - what was the value of that variable again???

One of the nice things about pinned DataTips is that they keep track of their "last value from the debug session" - and you can look these values up within the VB/C# code editor even when the debugger is no longer running. DataTips are by default hidden when you are in the code editor and the debugger isn't running. On the left-hand margin of the code editor, though, you'll find a push-pin for each pinned DataTip that you've previously set up. Hovering your mouse over a pinned DataTip causes it to display on the screen - for example, hovering over the pin for the "Request" object shows the debug session's last values along with the comment we associated with them. This makes it much easier to keep track of state and conditions as you toggle between code-editing mode and debugging mode on your projects.

Importing/Exporting Pinned DataTips

As mentioned earlier, pinned DataTips are by default saved across Visual Studio sessions (you don't need to do anything to enable this). VS 2010 also supports importing/exporting pinned DataTips to XML files - which you can then pass off to other developers, attach to a bug report, or simply re-load later. Combined with the new support for importing/exporting breakpoints, this makes it much easier for multiple developers to share debugger configurations and collaborate across debug sessions.

Summary

Visual Studio 2010 includes a bunch of great new debugger features, both big and small. Today's post shared some of the nice debugger usability improvements. All of the features above are supported in the Visual Studio 2010 Professional edition (the pinned DataTip features are also supported in the free Visual Studio 2010 Express editions). I'll be covering some of the "big big" new debugging features like IntelliTrace, parallel/multithreaded debugging, and dump file analysis in future blog posts.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article
