Search Results

Search found 16022 results on 641 pages for 'model export'.


  • JTable.removeColumn() method throws exception

    - by sanjeev
    To hide a column from only the view of a JTable, I am using the removeColumn() method, but it throws this exception: Exception in thread "AWT-EventQueue-0" java.lang.ArrayIndexOutOfBoundsException: 7 >= 7 at java.util.Vector.elementAt(Vector.java:470) at javax.swing.table.DefaultTableColumnModel.getColumn(DefaultTableColumnModel.java:294) at javax.swing.plaf.basic.BasicTableHeaderUI.paint(BasicTableHeaderUI.java:648). I think the exception appears when I modify the model after removing the column from the view. Is it because the column no longer exists in the view while the model is updating the table? What is the best way to hide a column in the view of a JTable, instead of setting its sizes to 0?
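
    Below is a minimal, self-contained sketch (not based on the asker's actual model) of one way to hide a column from the view while keeping it in the model. The setAutoCreateColumnsFromModel(false) call is an assumption about what avoids the stale-header repaint: it stops the JTable from rebuilding its view columns when the model fires a structure change.

        import javax.swing.JTable;
        import javax.swing.table.DefaultTableModel;
        import javax.swing.table.TableColumn;

        public class HiddenColumnDemo {
            public static void main(String[] args) {
                DefaultTableModel model = new DefaultTableModel(
                        new Object[][] { { "a", "b", "c" } },
                        new Object[] { "Col0", "Col1", "Col2" });
                JTable table = new JTable(model);

                // Keep the view's columns under our control; otherwise a
                // fireTableStructureChanged() rebuilds them from the model.
                table.setAutoCreateColumnsFromModel(false);

                // Hide "Col2" from the view only; it stays in the model.
                TableColumn hidden = table.getColumnModel().getColumn(2);
                table.removeColumn(hidden);

                // Model updates still reach the remaining view columns.
                model.setValueAt("updated", 0, 1);

                // To show the column again later: table.addColumn(hidden);
            }
        }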

    Read the article

  • What type of data should I send to the view?

    - by Vizualni
    Hello, this is a question I've been asking myself since the day I started programming the MVC way. Should I send arrays filled with data to the view, or should I send the objects I retrieved from the database? My model returns data as objects, like this: $new_data = $model->find_by_id(1); echo $new_data->name; $new_data->name = "whatever"; $new_data->save(); What would be the best way to do this? For example, in view.php: echo $object->name; or echo $array['name']? The language is PHP :).

    Read the article

  • Qt drag/drop: cannot move when copy is enabled (Ubuntu Gnome)

    - by Scott
    I'm implementing a view and a model where I want to support both moving items internally (by dragging) and copying items (by pressing Ctrl while dragging). I've done everything I need to do according to the instructions: I've set up the MIME functions and implemented removeRows() and flags(). The problem is that when I drag, it defaults to a copy operation (I get the arrow cursor with a plus sign, and it indeed copies the item by creating a new one in the model). The only difference I can see is this: if I return only Qt::MoveAction in supportedDropActions(), it only moves; if I return (Qt::CopyAction | Qt::MoveAction), it only copies. Any ideas? I want it to work like files in Nautilus (Gnome) or Windows Explorer: drag moves icons around, Ctrl+drag copies them.

    Read the article

  • Windows 7 - User-specific %PATH%

    - by MiffTheFox
    I'd like to set up Windows 7 so that each user has their own private directory in %PATH%. I tried setting %PATH% to %HOMEDRIVE%%HOMEPATH%\Bin;%SystemRoot%\System32;[...] but it doesn't seem to work. For those who don't realize what I'm trying to do, it's sort of like export PATH=~/bin would be on *nix. It can be set on a user-specific basis if need be (and that would actually be preferable to changing the machine-wide setting).

    Read the article

  • Normalizing two types of customers into one table

    - by JDewzy
    I am trying to model a sales situation where you can sell either to a person or to a business with a contact person. I cannot figure out the proper way to do this. It seems like two tables would be incorrect, but how do I model a Customer table that can be a business or a person? Would I just have a boolean for "business" and an additional "business_name" field that defaults to NULL? But then I have to do an if/then on the columns, and that seems like poor design. Any advice, direction, or links are appreciated.
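
    One way to see the trade-off is to sketch the two designs as classes. This is a hypothetical illustration (names such as Party, Person, Organization and Sale are mine, not from the question) roughly corresponding to the nullable-column design versus a supertype/subtype ("party") design in the database.

        // Option A: one Customer with a nullable business name (if/then on columns).
        class Customer {
            Long id;
            String contactName;    // the individual, or the contact at the business
            String businessName;   // null means a private (person) customer

            boolean isBusiness() { return businessName != null; }
        }

        // Option B: a "party" supertype with Person and Organization subtypes,
        // which maps to supertype/subtype tables (or single-table inheritance).
        abstract class Party {
            Long id;
            abstract String displayName();
        }

        class Person extends Party {
            String firstName, lastName;
            String displayName() { return firstName + " " + lastName; }
        }

        class Organization extends Party {
            String businessName;
            String contactPerson;
            String displayName() { return businessName; }
        }

        class Sale {
            Long id;
            Party customer;   // a sale references either kind without column checks
        }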

    Read the article

  • Can't log to a file from tomcat6 with log4j

    - by Ivan Nakov
    I have one stupid problem which has been killing me for hours: I'm trying to configure logging for my project. I started with a simple Spring MVC project generated by STS, then added an org.apache.log4j.RollingFileAppender to the existing log4j.xml file. <?xml version="1.0" encoding="UTF-8"?> <!-- Appenders --> <appender name="console" class="org.apache.log4j.ConsoleAppender"> <param name="Target" value="System.out" /> <layout class="org.apache.log4j.PatternLayout"> <param name="ConversionPattern" value="%-5p: %c - %m%n" /> </layout> </appender> <appender name="FilleAppender" class="org.apache.log4j.RollingFileAppender"> <param name="maxFileSize" value="100KB" /> <param name="maxBackupIndex" value="2" /> <param name="File" value="/home/ivan/Desktop/app.log" /> <layout class="org.apache.log4j.PatternLayout"> <param name="ConversionPattern" value="%d{ABSOLUTE} %5p %c{1}: %m%n " /> </layout> </appender> <!-- Application Loggers --> <logger name="org.elsys.logger"> <level value="debug" /> </logger> <!-- 3rdparty Loggers --> <logger name="org.springframework.core"> <level value="info" /> </logger> <logger name="org.springframework.beans"> <level value="info" /> </logger> <logger name="org.springframework.context"> <level value="info" /> </logger> <logger name="org.springframework.web"> <level value="info" /> </logger> <!-- Root Logger --> <root> <priority value="debug" /> <appender-ref ref="FilleAppender" /> </root> When I deploy the project to the tomcat6 server and open the URL, the logger doesn't generate a log file. I'm trying to log from this controller: @Controller public class HomeController { private static final Logger logger = LoggerFactory.getLogger(HomeController.class); /** * Simply selects the home view to render by returning its name. */ @RequestMapping(value = "/", method = RequestMethod.GET) public String home(Locale locale, Model model) { logger.info("Welcome home! the client locale is "+ locale.toString()); Date date = new Date(); DateFormat dateFormat = DateFormat.getDateTimeInstance(DateFormat.LONG, DateFormat.LONG, locale); String formattedDate = dateFormat.format(date); logger.debug("send view"); model.addAttribute("serverTime", formattedDate ); return "home"; } } When I log from this simple Main class, it works correctly: public class Main { public static void main(String[] args) { Logger log = LoggerFactory.getLogger(Main.class); log.debug("Test"); } } I'm using tomcat6 and Ubuntu 11.10. I did some research on the net and found various options to fix this problem, but they didn't help. If someone has an idea how to fix it, please help me.
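
    As a sanity check, here is a minimal standalone sketch (my own, not from the question) that builds the same appender in code; it assumes log4j 1.2 and the slf4j-log4j12 binding are on the webapp's classpath. If this works from a plain main() but nothing appears under Tomcat, the usual suspects are the log4j.xml missing its <log4j:configuration> root element around the appenders and loggers shown above, or the tomcat6 user lacking write permission to /home/ivan/Desktop.

        import org.apache.log4j.Level;
        import org.apache.log4j.Logger;
        import org.apache.log4j.PatternLayout;
        import org.apache.log4j.RollingFileAppender;

        public class Log4jSmokeTest {
            public static void main(String[] args) throws Exception {
                // Build in code what the XML is supposed to configure.
                RollingFileAppender file = new RollingFileAppender(
                        new PatternLayout("%d{ABSOLUTE} %5p %c{1}: %m%n"),
                        "/home/ivan/Desktop/app.log");   // path from the question
                file.setMaxFileSize("100KB");
                file.setMaxBackupIndex(2);

                Logger root = Logger.getRootLogger();
                root.setLevel(Level.DEBUG);
                root.addAppender(file);

                root.info("If this line reaches the file, log4j itself is fine.");
            }
        }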

    Read the article

  • ubuntu 10.04; kvm bridged networking not working with public ip addresses

    - by senorsmile
    I have a dedicated hosted server box with ubuntu 10.04 64 bit installed. I would like to run kvm with ubuntu 8.04 installed for some php 5.2 compatible apps (they don't work right with php 5.3, the default in ubuntu 10.04). I installed KVM as instructed at https://help.ubuntu.com/community/KVM/Installation . I installed the vm using virt-manager. I never could figure out how to use virt-install or any of those automated installers, so I just installed it using the disc. I set up bridged networking as per https://help.ubuntu.com/community/KVM/Networking . However, the bridged connection doesn't work. Here's my /etc/network/interfaces on the host, running ubuntu 10.04 (with the specific public ip blanked): auto lo iface lo inet loopback auto eth0 iface eth0 inet manual auto br0 iface br0 inet static address xx.xx.xx.xx netmask 255.255.255.248 gateway xx.xx.xx.xa bridge_ports eth0 bridge_stp on bridge_fd 0 bridge_maxwait 10 Here's my /etc/network/interfaces on the guest, running ubuntu 8.04: auto lo iface lo inet loopback auto eth0 iface eth0 inet static address xx.xx.xx.xy netmask 255.255.255.248 gateway xx.xx.xx.xa The two VMs can communicate with each other, but the guest VM can't reach anything in the outside world. Here's my /etc/libvirt/qemu/store_804.xml: <domain type='kvm'> <name>store_804</name> <uuid>27acfb75-4f90-a34c-9a0b-70a6927ae84c</uuid> <memory>2097152</memory> <currentMemory>2097152</currentMemory> <vcpu>2</vcpu> <os> <type arch='x86_64' machine='pc-0.12'>hvm</type> <boot dev='hd'/> </os> <features> <acpi/> <apic/> <pae/> </features> <clock offset='utc'/> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/bin/kvm</emulator> <disk type='file' device='disk'> <driver name='qemu' type='raw'/> <source file='/var/lib/libvirt/images/store_804.img'/> <target dev='hda' bus='ide'/> </disk> <disk type='block' device='cdrom'> <driver name='qemu' type='raw'/> <target dev='hdc' bus='ide'/> <readonly/> </disk> <interface type='bridge'> <mac address='52:54:00:26:0b:c6'/> <source bridge='br0'/> <model type='virtio'/> </interface> <console type='pty'> <target port='0'/> </console> <console type='pty'> <target port='0'/> </console> <input type='mouse' bus='ps2'/> <graphics type='vnc' port='-1' autoport='yes'/> <sound model='es1370'/> <video> <model type='cirrus' vram='9216' heads='1'/> </video> </devices> </domain> Any idea where I've gone wrong?

    Read the article

  • Adding A Custom Dropdown in RCDC for Forefront Identity Manager 2010

    - by Daniel Lackey
    My latest exploration has been FIM 2010 for Identity Management. The following is a post on how to add a custom dropdown to the FIM Portal. I have decided to document this as I cannot find documentation on how to do this anywhere else. I hope others find it useful. For starters, this was not an easy task for me to figure out. I really would like to know why it is so cumbersome to do something that seems like a lot of people would need to do, but that's for another day. :) The dropdown I wanted to add was for ‘Account Status’, which would display whether the account is ‘Enabled’ or ‘Disabled’ in the data source Active Directory. This option would also allow helpdesk users or admins to administer the userAccountControl attribute in AD from the FIM Portal interface. The first thing I had to do was create the attribute itself. This is done by going to Administration → Schema Management from the FIM 2010 portal. Once here, you click on All Attributes. What is listed here are all attributes and their associated Resource Types in FIM. To create the ‘AccountStatus’ attribute, click on New. As shown below, enter ‘AccountStatus’ with no spaces for the System Name and ‘Account Status’ for the Display Name. The Data Type is going to be ‘Indexed String’. Click Next. Leave everything on the Localization tab at the default and click Next. On the Validation tab, as shown below, we will enter the regex expression ^(Enabled|Disabled)?$ with our two desired string values ‘Enabled’ and ‘Disabled’. Click on Finish and then Submit to complete adding the attribute. The next step involves associating the attribute with a resource type. This is called ‘Binding’ the attribute. From the Schema Management page, click on All Bindings. From the page that comes up, click on New. As shown below, enter ‘User’ for the Resource Type and ‘Account Status’ for the Attribute Type. This essentially binds the Account Status attribute to the ‘User’ Resource Type. Click Next. On the ‘Attribute Override’ tab, type in ‘Account Status’ for the Display Name field. Click Next. On the ‘Localization’ tab, click Next. On the ‘Validation’ tab, enter the regex expression ^(Enabled|Disabled)?$ we entered previously for the attribute. Click Finish and then Submit to complete. Now that the Attribute and the Binding are complete, you have to give users permission to see the attribute on the User Edit page. Go to Administration → Management Policy Rules. Look for the rule named Administration: Administrators can read and update Users and click on it. Once it opens, click on the ‘Target Resources’ tab and look at the section named Resource Attributes. At the end, type in the ‘Account Status’ attribute and check it with the validator. Once done, click OK to save the changes. Lastly, we need to add the actual dropdown control to the RCDC (Resource Control Display Configuration) for User Editing. Go to Administration → Resource Control Display Configuration. From here, navigate until you find the RCDC named Configuration for User Editing and click on it. The following is what you will see: The first step is to export the Configuration Data file. Click on the Export configuration link and save the file to your desktop or another folder. Find the file you just exported and open it in your XML editor of choice. I use notepad but anything will work. Since we are adding a dropdown control, first find another control in the existing file that is already a dropdown in FIM. 
I used EmployeeType as my example. Copy the control from the beginning tag named <my:Control… to the ending tag </my:Control>. Now take what you copied and paste it in whatever location you desire within the form between two other controls. I chose to place the ‘Account Status’ field after the ‘Account Name’ field. After you paste the control you will need to modify so it looks like this:       Notice where you specify what attribute you are dealing with where it has AccountStatus in the XML. Once you are complete with modifying this, save the file and make sure it is a .xml file.   Now go back to the Configuration for User Editing screen and look at the section named ‘Configuration Data’. Click the ‘Browse’ button and find the XML file you just modified and choose it. Click OK on the bottom of the window and you are done!   Now when you click on a user’s name in the FIM Portal, you should see the newly added dropdown box as below:       Later I will post more about this drop down, specifically on how to automate actually ‘Disabling’ the account in the data source through the FIM Workflows and MAs.   <my:Control my:Name="AccountStatus" my:TypeName="UocDropDownList" my:Caption="{Binding Source=schema, Path=AccountStatus.DisplayName}" my:Description="{Binding Source=schema, Path=AccountStatus.Description}" my:RightsLevel="{Binding Source=rights, Path=AccountStatus}"> <my:Properties> <my:Property my:Name="ValuePath" my:Value="Value"/> <my:Property my:Name="CaptionPath" my:Value="Caption"/> <my:Property my:Name="HintPath" my:Value="Hint"/> <my:Property my:Name="ItemSource" my:Value="{Binding Source=schema, Path=AccountStatus.LocalizedAllowedValues}"/> <my:Property my:Name="SelectedValue" my:Value="{Binding Source=object, Path=AccountStatus, Mode=TwoWay}"/> </my:Properties> </my:Control>

    Read the article

  • Using a service registry that doesn’t suck part II: Dear registry, do you have to be a message broker?

    - by gsusx
    Continuing our series of posts about service registry patterns that suck, we decided to address one of the most common techniques that service-oriented architecture (SOA) governance tools use to enforce policies. Scenario: Service registries and repositories typically serve as a mechanism for storing service policies that model behaviors such as security, trust, reliable messaging, SLAs, etc. This makes perfect sense given that SOA governance registries were conceived as a mechanism to store and manage the policies...(read more)

    Read the article

  • MEF CompositionInitializer for WPF

    - by Reed
    The Managed Extensibility Framework is an amazingly useful addition to the .NET Framework.  I was very excited to see System.ComponentModel.Composition added to the core framework.  Personally, I feel that MEF is one tool I've always been missing in my .NET development. Unfortunately, one scenario where MEF tends to fall short of its full potential is Windows Presentation Foundation development.  In particular, there are many times when the XAML parser constructs objects in WPF development, which makes composition of those parts difficult.  The current release of MEF (Preview Release 9) addresses this for Silverlight developers via System.ComponentModel.Composition.CompositionInitializer.  However, there is no equivalent class for WPF developers. The CompositionInitializer class provides the means for an object to compose itself.  This is very useful with WPF and Silverlight development, since it allows a View, such as a UserControl, to be generated via the standard XAML parser, and still automatically pull in the appropriate ViewModel in an extensible manner.  Glenn Block has demonstrated the usage for Silverlight in detail, but the same issues apply in WPF. As an example, let's take a look at a very simple case.  Take the following XAML for a Window: <Window x:Class="WpfApplication1.MainView" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="MainWindow" Height="220" Width="300"> <Grid> <TextBlock Text="{Binding TheText}" /> </Grid> </Window> This does nothing but create a Window, add a simple TextBlock control, and use it to display the value of our “TheText” property in our DataContext class.  Since this is our main window, WPF will automatically construct and display this Window, so we need to handle constructing the DataContext and setting it ourselves. We could do this in code or in XAML, but in order to do it directly, we would need to hard code the ViewModel type directly into our XAML code, or we would need to construct the ViewModel class and set it in the code behind.  Both have disadvantages, and the disadvantages grow if we're using MEF to compose our ViewModel. Ideally, we'd like to be able to have MEF construct our ViewModel for us.  This way, it can provide any construction requirements for our ViewModel via [ImportingConstructor], and it can handle fully composing the imported properties on our ViewModel.  CompositionInitializer allows this to occur. We use CompositionInitializer within our View's constructor, and use it for self-composition of our View.  Using CompositionInitializer, we can modify our code behind to: public partial class MainView : Window { public MainView() { InitializeComponent(); CompositionInitializer.SatisfyImports(this); } [Import("MainViewModel")] public object ViewModel { get { return this.DataContext; } set { this.DataContext = value; } } } We can then add an Export on our ViewModel class like so: [Export("MainViewModel")] public class MainViewModel { public string TheText { get { return "Hello World!"; } } } MEF will automatically compose our application, deferring our ViewModel injection into the DataContext of our View until runtime.  When we run this, we'll see: There are many other approaches for using MEF to wire up the extensible parts within your application, of course.  
However, any time an object is going to be constructed by code outside of your control, CompositionInitializer allows us to continue to use MEF to satisfy the import requirements of that object. In order to use this from WPF, I’ve ported the code from MEF Preview 9 and Glenn Block’s (now obsolete) PartInitializer port to Windows Presentation Foundation.  There are some subtle changes from the Silverlight port, mainly to handle running in a desktop application context.  The default behavior of my port is to construct an AggregateCatalog containing a DirectoryCatalog set to the location of the entry assembly of the application.  In addition, if an “Extensions” folder exists under the entry assembly’s directory, a second DirectoryCatalog for that folder will be included.  This behavior can be overridden by specifying a CompositionContainer or one or more ComposablePartCatalogs to the System.ComponentModel.Composition.Hosting.CompositionHost static class prior to the first use of CompositionInitializer. Please download CompositionInitializer and CompositionHost for VS 2010 RC, and contact me with any feedback. Composition.Initialization.Desktop.zip Edit on 3/29: Glenn Block has since updated his version of CompositionInitializer (and ExportFactory<T>!), and made it available here: http://cid-f8b2fd72406fb218.skydrive.live.com/self.aspx/blog/Composition.Initialization.Desktop.zip This is a .NET 3.5 solution, and should soon be pushed to CodePlex, and made available on the main MEF site.

    Read the article

  • TFS 2008 Web Access Report 100 record limitation

    - by HosamKamel
    By default, TFS 2008 Web Access has a limit of 100 records when you open any query in report mode. Even if you export the query to Excel or PDF, you will only get the first 100 records. To overcome this issue, you have to reconfigure this count in the web.config file. Navigate to the web access files at C:\Program Files\Microsoft Visual Studio 2008 Team System Web Access\Wiwa, open web.config, and modify the maxWorkitemsInReportList count to whatever count you need. You also need to modify the same configuration in the web.config located at C:\Program Files\Microsoft Visual Studio 2008 Team System Web Access\Web. A full discussion thread exists here: Team Foundation Server - Team System Web Access

    Read the article

  • Using Unity – Part 1

    - by nmarun
    I have been going through implementing some IoC pattern using Unity and so I decided to share my learnings (I know that’s not an English word, but you get the point). Ok, so I have an ASP.net project named ProductWeb and a class library called ProductModel. In the model library, I have a class called Product: 1: public class Product 2: { 3: public string Name { get; set; } 4: public string Description { get; set; } 5:  6: public Product() 7: { 8: Name = "iPad"; 9: Description = "Not just a reader!"; 10: } 11:  12: public string WriteProductDetails() 13: { 14: return string.Format("Name: {0} Description: {1}", Name, Description); 15: } 16: } In the Page_Load event of the default.aspx, I’ll need something like: 1: Product product = new Product(); 2: productDetailsLabel.Text = product.WriteProductDetails(); Now, let’s go ‘Unity’fy this application. I assume you have all the bits for the pattern. If not, get it from here. I found this schematic representation of Unity pattern from the above link. This image might not make much sense to you now, but as we proceed, things will get better. The first step to implement the Inversion of Control pattern is to create interfaces that your types will implement. An IProduct interface is added to the ProductModel project. 1: public interface IProduct 2: { 3: string WriteProductDetails(); 4: } Let’s make our Product class to implement the IProduct interface. The application will compile and run as before despite the changes made. Add the following references to your web project: Microsoft.Practices.Unity Microsoft.Practices.Unity.Configuration Microsoft.Practices.Unity.StaticFactory Microsoft.Practices.ObjectBuilder2 We need to add a few lines to the web.config file. The line below tells what version of Unity pattern we’ll be using. 1: <configSections> 2: <section name="unity" type="Microsoft.Practices.Unity.Configuration.UnityConfigurationSection, Microsoft.Practices.Unity.Configuration, Version=1.2.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/> 3: </configSections> Add another block with the same name as the section name declared above – ‘unity’. 1: <unity> 2: <typeAliases> 3: <!--Custom object types--> 4: <typeAlias alias="IProduct" type="ProductModel.IProduct, ProductModel"/> 5: <typeAlias alias="Product" type="ProductModel.Product, ProductModel"/> 6: </typeAliases> 7: <containers> 8: <container name="unityContainer"> 9: <types> 10: <type type="IProduct" mapTo="Product"/> 11: </types> 12: </container> 13: </containers> 14: </unity> From the Unity Configuration schematic shown above, you see that the ‘unity’ block has a ‘typeAliases’ and a ‘containers’ segment. The typeAlias element gives a ‘short-name’ for a type. This ‘short-name’ can be used to point to this type any where in the configuration file (web.config in our case, but all this information could be coming from an external xml file as well). The container element holds all the mapping information. This container is referenced through its name attribute in the code and you can have multiple of these container elements in the containers segment. The ‘type’ element in line 10 basically says: ‘When Unity requests to resolve the alias IProduct, return an instance of whatever the short-name of Product points to’. This is the most basic piece of Unity pattern and all of this is accomplished purely through configuration. 
So, in future you have a change in your model, all you need to do is - implement IProduct on the new model class and - either add a typeAlias for the new type and point the mapTo attribute to the new alias declared - or modify the mapTo attribute of the type element to point to the new alias (as the case may be). Now for the calling code. It’s a good idea to store your unity container details in the Application cache, as this is rarely bound to change and also adds for better performance. The Global.asax.cs file comes for our rescue: 1: protected void Application_Start(object sender, EventArgs e) 2: { 3: // create and populate a new Unity container from configuration 4: IUnityContainer unityContainer = new UnityContainer(); 5: UnityConfigurationSection section = (UnityConfigurationSection)ConfigurationManager.GetSection("unity"); 6: section.Containers["unityContainer"].Configure(unityContainer); 7: Application["UnityContainer"] = unityContainer; 8: } 9:  10: protected void Application_End(object sender, EventArgs e) 11: { 12: Application["UnityContainer"] = null; 13: } All this says is: create an instance of UnityContainer() and read the ‘unity’ section from the configSections segment of the web.config file. Then get the container named ‘unityContainer’ and store it in the Application cache. In my code-behind file, I’ll make use of this UnityContainer to create an instance of the Product type. 1: public partial class _Default : Page 2: { 3: private IUnityContainer unityContainer; 4: protected void Page_Load(object sender, EventArgs e) 5: { 6: unityContainer = Application["UnityContainer"] as IUnityContainer; 7: if (unityContainer == null) 8: { 9: productDetailsLabel.Text = "ERROR: Unity Container not populated in Global.asax.<p />"; 10: } 11: else 12: { 13: IProduct productInstance = unityContainer.Resolve<IProduct>(); 14: productDetailsLabel.Text = productInstance.WriteProductDetails(); 15: } 16: } 17: } Looking the ‘else’ block, I’m asking the unityContainer object to resolve the IProduct type. All this does, is to look at the matching type in the container, read its mapTo attribute value, get the full name from the alias and create an instance of the Product class. Fabulous!! I’ll go more in detail in the next blog. The code for this blog can be found here.

    Read the article

  • Parallelism in .NET – Part 10, Cancellation in PLINQ and the Parallel class

    - by Reed
    Many routines are parallelized because they are long running processes.  When writing an algorithm that will run for a long period of time, it's typically a good practice to allow that routine to be cancelled.  I previously discussed terminating a parallel loop from within, but have not demonstrated how a routine can be cancelled from the caller's perspective.  Cancellation in PLINQ and the Task Parallel Library is handled through a new, unified cooperative cancellation model introduced with .NET 4.0. Cancellation in .NET 4 is based around a new, lightweight struct called CancellationToken.  A CancellationToken is a small, thread-safe value type which is generated via a CancellationTokenSource.  There are many goals which led to this design.  For our purposes, we will focus on a couple of specific design decisions: Cancellation is cooperative.  A calling method can request a cancellation, but it's up to the processing routine to terminate – it is not forced. Cancellation is consistent.  A single method call requests a cancellation on every copied CancellationToken in the routine. Let's begin by looking at how we can cancel a PLINQ query.  Suppose we wanted to provide the option to cancel our query from Part 6: double min = collection .AsParallel() .Min(item => item.PerformComputation()); We would rewrite this to allow for cancellation by adding a call to ParallelEnumerable.WithCancellation as follows: var cts = new CancellationTokenSource(); // Pass cts here to a routine that could, // in parallel, request a cancellation try { double min = collection .AsParallel() .WithCancellation(cts.Token) .Min(item => item.PerformComputation()); } catch (OperationCanceledException e) { // Query was cancelled before it finished } Here, if the user calls cts.Cancel() before the PLINQ query completes, the query will stop processing, and an OperationCanceledException will be raised.  Be aware, however, that cancellation will not be instantaneous.  When cts.Cancel() is called, the query will only stop after the current item.PerformComputation() elements all finish processing.  cts.Cancel() will prevent PLINQ from scheduling a new task for a new element, but will not stop items which are currently being processed.  
This goes back to the first goal I mentioned – Cancellation is cooperative.  Here, we’re requesting the cancellation, but it’s up to PLINQ to terminate. If we wanted to allow cancellation to occur within our routine, we would need to change our routine to accept a CancellationToken, and modify it to handle this specific case: public void PerformComputation(CancellationToken token) { for (int i=0; i<this.iterations; ++i) { // Add a check to see if we've been canceled // If a cancel was requested, we'll throw here token.ThrowIfCancellationRequested(); // Do our processing now this.RunIteration(i); } } With this overload of PerformComputation, each internal iteration checks to see if a cancellation request was made, and will throw an OperationCanceledException at that point, instead of waiting until the method returns.  This is good, since it allows us, as developers, to plan for cancellation, and terminate our routine in a clean, safe state. This is handled by changing our PLINQ query to: try { double min = collection .AsParallel() .WithCancellation(cts.Token) .Min(item => item.PerformComputation(cts.Token)); } catch (OperationCanceledException e) { // Query was cancelled before it finished } PLINQ is very good about handling this exception, as well.  There is a very good chance that multiple items will raise this exception, since the entire purpose of PLINQ is to have multiple items be processed concurrently.  PLINQ will take all of the OperationCanceledException instances raised within these methods, and merge them into a single OperationCanceledException in the call stack.  This is done internally because we added the call to ParallelEnumerable.WithCancellation. If, however, a different exception is raised by any of the elements, the OperationCanceledException as well as the other Exception will be merged into a single AggregateException. The Task Parallel Library uses the same cancellation model, as well.  Here, we supply our CancellationToken as part of the configuration.  The ParallelOptions class contains a property for the CancellationToken.  This allows us to cancel a Parallel.For or Parallel.ForEach routine in a very similar manner to our PLINQ query.  As an example, we could rewrite our Parallel.ForEach loop from Part 2 to support cancellation by changing it to: try { var cts = new CancellationTokenSource(); var options = new ParallelOptions() { CancellationToken = cts.Token }; Parallel.ForEach(customers, options, customer => { // Run some process that takes some time... DateTime lastContact = theStore.GetLastContact(customer); TimeSpan timeSinceContact = DateTime.Now - lastContact; // Check for cancellation here options.CancellationToken.ThrowIfCancellationRequested(); // If it's been more than two weeks, send an email, and update... if (timeSinceContact.Days > 14) { theStore.EmailCustomer(customer); customer.LastEmailContact = DateTime.Now; } }); } catch (OperationCanceledException e) { // The loop was cancelled } Notice that here we use the same approach taken in PLINQ.  The Task Parallel Library will automatically handle our cancellation in the same manner as PLINQ, providing a clean, unified model for cancellation of any parallel routine.  The TPL performs the same aggregation of the cancellation exceptions as PLINQ, as well, which is why a single exception handler for OperationCanceledException will cleanly handle this scenario.  This works because we’re using the same CancellationToken provided in the ParallelOptions.  
If a different exception was thrown by one thread, or a CancellationToken from a different CancellationTokenSource was used to raise our exception, we would instead receive all of our individual exceptions merged into one AggregateException.

    Read the article

  • Converting a Visual Studio 2003 Web Project to a Visual Studio 2008 Web Application Project

    - by navaneeth
    This walkthrough describes how to convert a Visual Studio .NET 2002 or Visual Studio .NET 2003 Web project to a Visual Studio 2008 Web application project. The Visual Studio 2008 Web application project model is like the Visual Studio 2005 Web application project model. Therefore, the conversion processes are similar. For more information about Web application projects, see ASP.NET Web Application Projects. You can also convert from a Visual Studio .NET Web project to a Visual Studio 2008 Web site project. However, conversion to a Web application project is the approach that is supported, and gives you the convenience of tools to help with the conversion. For example, when you convert to a Visual Studio 2008 Web application project, you can use the Visual Studio Conversion Wizard to automate part of the process. For information about how to convert a Visual Studio .NET Web project to a Visual Studio 2008 Web site, see Common Web Project Conversion Issues and Solutions. There are two parts involved in converting a Visual Studio 2002 or 2003 Web project to a Visual Studio 2008 Web application project. The parts are as follows: Converting the project. You can use the Visual Studio Conversion Wizard for the initial conversion of the project and Web.config files. You can later use the Convert To Web Application command to update the project's files and structure. Upgrading the .NET Framework version of the project. You must upgrade the project's .NET Framework version to either .NET Framework 2.0 SP1 or to .NET Framework 3.5. This .NET Framework version upgrade is required because Visual Studio 2008 cannot target earlier versions of the .NET Framework. You can perform this upgrade during the project conversion, by using the Conversion Wizard. Alternatively, you can upgrade the .NET Framework version after you convert the project.   NoteYou can change a project's .NET Framework version manually. To do so, in Visual Studio open the property pages for the project, click the Application tab, and then select a new version from the Target Framework list. This walkthrough illustrates the following tasks: Opening the Visual Studio .NET project in Visual Studio 2008 and creating a backup of the project files. Upgrading the .NET Framework version that the project targets. Converting the project file and the Web.config file. Converting ASP.NET code files. Testing the converted project. Prerequisites    To complete this walkthrough, you will need: Visual Studio 2008. A Web site project that was created in Visual Studio .NET version 2002 or 2003 that compiles and runs without errors. Converting the Project and Upgrading the .NET Framework Version    To begin, you open the project in Visual Studio 2008, which starts the conversion. It offers you an opportunity to back up the project before converting it. NoteIt is strongly recommended that you back up the project. The conversion works on the original project files, which cannot be recovered if the conversion is not successful.To convert the project and back up the files In Visual Studio 2008, in the File menu, click Open and then click Project. The Open Project dialog box is displayed. Browse to the folder that contains the project or solution file for the Visual Studio .NET project, select the file, and then click Open. NoteMake sure that you open the project by using the Open Project command. 
If you use the Open Web Site command, the project will be converted to the Web site project format.The Conversion Wizard opens and prompts you to create a backup before converting the project. To create the backup, click Yes. Click Browse, select the folder in which the backup should be created, and then click Next. Click Finish. The backup starts. NoteThere might be significant delays as the Conversion Wizard copies files, with no updates or progress indicated. Wait until the process finishes before you continue.When the conversion finishes, the wizard prompts you to upgrade the targeted version of the .NET Framework for the project. To upgrade to the .NET Framework 3.5, click Yes. To upgrade the project to target the .NET Framework 2.0 SP1, click No. It is recommended that you leave the check box selected that asks whether you want to upgrade all Webs in the solution. If you upgrade to .NET Framework 3.5, the project's Web.config file is modified at the same time as the project file. When the upgrade and conversion have finished, a message is displayed that indicates that you have completed the first step in converting your project. Click OK. The wizard displays status information about the conversion. Click Close. Testing the Converted Project    After the conversion has finished, you can test the project to make sure that it runs. This will also help you identify code in the project that must be updated. To verify that the project runs If you know about changes that are required for the code to run with the new version of the .NET Framework, make those changes. In the Build menu, click Build. Any missing references or other compilation issues in the project are displayed in the Error List window. The most likely issues are missing assembly references or issues with dynamically generated types. In Solution Explorer, right-click the Web page that will be used to launch the application, and then click Set as Start Page. On the Debug menu, click Start Debugging. If debugging is not enabled, the Debugging Not Enabled dialog box is displayed. Select the option to add a Web.config file that has debugging enabled, and then click OK. Verify that the converted project runs as expected. Do not continue with the conversion process until all build and run-time errors are resolved. Converting ASP.NET Code Files    ASP.NET Web page files and user-control files in Visual Studio 2008 that use the code-behind model have an associated designer file. The files that you just converted will have an associated code-behind file, but no designer file. Therefore, the next step is to generate designer files. NoteOnly ASP.NET Web pages and user controls that have their code in a separate code file require a separate designer file. For pages that have inline code and no associated code file, no designer file will be generated.To convert ASP.NET code files In Solution Explorer, right-click the project node, and then click Convert To Web Application. The files are converted. Verify that the converted code files have a code file and a designer file. Build and run the project to verify the results of the conversion.

    Read the article

  • Using Microsoft Office 2007 with E-Business Suite Release 12

    - by Steven Chan
    Many products in the Oracle E-Business Suite offer optional integrations with Microsoft Office and Microsoft Projects.  For example, some EBS products can export tabular reports to Microsoft Excel.  Some EBS products integrate directly with Microsoft products, and others work through the Applications Desktop Integrator (WebADI and ADI) as an intermediary.These EBS integrations have historically been documented in their respective product-specific documentation.  In other words, if an EBS product in the Oracle Financials family supported an integration with, say, Microsoft Excel, it was up to the product team to document that in the Oracle Financials documentation.Some EBS systems administrators have found the process of hunting through the various product-specific documents for Office-related information to be a bit difficult.  In response to your Service Requests and emails, we've released a new document that consolidates and summarises all patching and configuration requirements for EBS products with MS Office integration points in a single place:Using Microsoft Office 2007 with Oracle E-Business Suite 11i and R12 (Note 1072807.1)

    Read the article

  • Creating an ASP.NET report using Visual Studio 2010 - Part 1

    - by rajbk
    This tutorial walks you through creating an report based on the Northwind sample database. You will add a client report definition file (RDLC), create a dataset for the RDLC, define queries using LINQ to Entities, design the report and add a ReportViewer web control to render the report in a ASP.NET web page. The report will have a chart control. Different results will be generated by changing filter criteria. At the end of the walkthrough, you should have a UI like the following.  From the UI below, a user is able to view the product list and can see a chart with the sum of Unit price for a given category. They can filter by Category and Supplier. The drop downs will auto post back when the selection is changed.  This demo uses Visual Studio 2010 RTM. This post is split into three parts. The last part has the sample code attached. Creating an ASP.NET report using Visual Studio 2010 - Part 2 Creating an ASP.NET report using Visual Studio 2010 - Part 3   Lets start by creating a new ASP.NET empty web application called “NorthwindReports” Creating the Data Access Layer (DAL) Add a web form called index.aspx to the root directory. You do this by right clicking on the NorthwindReports web project and selecting “Add item..” . Create a folder called “DAL”. We will store all our data access methods and any data transfer objects in here.   Right click on the DAL folder and add a ADO.NET Entity data model called Northwind. Select “Generate from database” and click Next. Create a connection to your database containing the Northwind sample database and click Next.   From the table list, select Categories, Products and Suppliers and click next. Our Entity data model gets created and looks like this:    Adding data transfer objects Right click on the DAL folder and add a ProductViewModel. Add the following code. This class contains properties we need to render our report. public class ProductViewModel { public int? ProductID { get; set; } public string ProductName { get; set; } public System.Nullable<decimal> UnitPrice { get; set; } public string CategoryName { get; set; } public int? CategoryID { get; set; } public int? SupplierID { get; set; } public bool Discontinued { get; set; } } Add a SupplierViewModel class. This will be used to render the supplier DropDownlist. public class SupplierViewModel { public string CompanyName { get; set; } public int SupplierID { get; set; } } Add a CategoryViewModel class. public class CategoryViewModel { public string CategoryName { get; set; } public int CategoryID { get; set; } } Create an IProductRepository interface. This will contain the signatures of all the methods we need when accessing the entity model.  This step is not needed but follows the repository pattern. interface IProductRepository { IQueryable<Product> GetProducts(); IQueryable<ProductViewModel> GetProductsProjected(int? supplierID, int? categoryID); IQueryable<SupplierViewModel> GetSuppliers(); IQueryable<CategoryViewModel> GetCategories(); } Create a ProductRepository class that implements the IProductReposity above. The methods available in this class are as follows: GetProducts – returns an IQueryable of all products. GetProductsProjected – returns an IQueryable of ProductViewModel. The method filters all the products based on SupplierId and CategoryId if any. It then projects the result into the ProductViewModel. 
GetSuppliers() – returns an IQueryable of all suppliers projected into a SupplierViewModel GetCategories() – returns an IQueryable of all categories projected into a CategoryViewModel  public class ProductRepository : IProductRepository { /// <summary> /// IQueryable of all Products /// </summary> /// <returns></returns> public IQueryable<Product> GetProducts() { var dataContext = new NorthwindEntities(); var products = from p in dataContext.Products select p; return products; }   /// <summary> /// IQueryable of Projects projected /// into the ProductViewModel class /// </summary> /// <returns></returns> public IQueryable<ProductViewModel> GetProductsProjected(int? supplierID, int? categoryID) { var projectedProducts = from p in GetProducts() select new ProductViewModel { ProductID = p.ProductID, ProductName = p.ProductName, UnitPrice = p.UnitPrice, CategoryName = p.Category.CategoryName, CategoryID = p.CategoryID, SupplierID = p.SupplierID, Discontinued = p.Discontinued }; // Filter on SupplierID if (supplierID.HasValue) { projectedProducts = projectedProducts.Where(a => a.SupplierID == supplierID); }   // Filter on CategoryID if (categoryID.HasValue) { projectedProducts = projectedProducts.Where(a => a.CategoryID == categoryID); }   return projectedProducts; }     public IQueryable<SupplierViewModel> GetSuppliers() { var dataContext = new NorthwindEntities(); var suppliers = from s in dataContext.Suppliers select new SupplierViewModel { SupplierID = s.SupplierID, CompanyName = s.CompanyName }; return suppliers; }   public IQueryable<CategoryViewModel> GetCategories() { var dataContext = new NorthwindEntities(); var categories = from c in dataContext.Categories select new CategoryViewModel { CategoryID = c.CategoryID, CategoryName = c.CategoryName }; return categories; } } Your solution explorer should look like the following. Build your project and make sure you don’t get any errors. In the next part, we will see how to create the client report definition file using the Report Wizard.   Creating an ASP.NET report using Visual Studio 2010 - Part 2

    Read the article

  • MvcExtensions – Bootstrapping

    - by kazimanzurrashid
    When you create a new ASP.NET MVC application you will find that the global.asax contains the following lines: namespace MvcApplication1 { // Note: For instructions on enabling IIS6 or IIS7 classic mode, // visit http://go.microsoft.com/?LinkId=9394801 public class MvcApplication : System.Web.HttpApplication { public static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.MapRoute( "Default", // Route name "{controller}/{action}/{id}", // URL with parameters new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults ); } protected void Application_Start() { AreaRegistration.RegisterAllAreas(); RegisterRoutes(RouteTable.Routes); } } } As the application grows, there are quite a lot of plumbing code gets into the global.asax which quickly becomes a design smell. Lets take a quick look at the code of one of the open source project that I recently visited: public static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.MapRoute("Default","{controller}/{action}/{id}", new { controller = "Home", action = "Index", id = "" }); } protected override void OnApplicationStarted() { Error += OnError; EndRequest += OnEndRequest; var settings = new SparkSettings() .AddNamespace("System") .AddNamespace("System.Collections.Generic") .AddNamespace("System.Web.Mvc") .AddNamespace("System.Web.Mvc.Html") .AddNamespace("MvcContrib.FluentHtml") .AddNamespace("********") .AddNamespace("********.Web") .SetPageBaseType("ApplicationViewPage") .SetAutomaticEncoding(true); #if DEBUG settings.SetDebug(true); #endif var viewFactory = new SparkViewFactory(settings); ViewEngines.Engines.Add(viewFactory); #if !DEBUG PrecompileViews(viewFactory); #endif RegisterAllControllersIn("********.Web"); log4net.Config.XmlConfigurator.Configure(); RegisterRoutes(RouteTable.Routes); Factory.Load(new Components.WebDependencies()); ModelBinders.Binders.DefaultBinder = new Binders.GenericBinderResolver(Factory.TryGet<IModelBinder>); ValidatorConfiguration.Initialize("********"); HtmlValidationExtensions.Initialize(ValidatorConfiguration.Rules); } private void OnEndRequest(object sender, System.EventArgs e) { if (((HttpApplication)sender).Context.Handler is MvcHandler) { CreateKernel().Get<ISessionSource>().Close(); } } private void OnError(object sender, System.EventArgs e) { CreateKernel().Get<ISessionSource>().Close(); } protected override IKernel CreateKernel() { return Factory.Kernel; } private static void PrecompileViews(SparkViewFactory viewFactory) { var batch = new SparkBatchDescriptor(); batch.For<HomeController>().For<ManageController>(); viewFactory.Precompile(batch); } As you can see there are quite a few of things going on in the above code, Registering the ViewEngine, Compiling the Views, Registering the Routes/Controllers/Model Binders, Settings up Logger, Validations and as you can imagine the more it becomes complex the more things will get added in the application start. One of the goal of the MVCExtensions is to reduce the above design smell. Instead of writing all the plumbing code in the application start, it contains BootstrapperTask to register individual services. 
Out of the box, it contains BootstrapperTask to register Controllers, Controller Factory, Action Invoker, Action Filters, Model Binders, Model Metadata/Validation Providers, ValueProvideraFactory, ViewEngines etc and it is intelligent enough to automatically detect the above types and register into the ASP.NET MVC Framework. Other than the built-in tasks you can create your own custom task which will be automatically executed when the application starts. When the BootstrapperTasks are in action you will find the global.asax pretty much clean like the following: public class MvcApplication : UnityMvcApplication { public void ErrorLog_Filtering(object sender, ExceptionFilterEventArgs e) { Check.Argument.IsNotNull(e, "e"); HttpException exception = e.Exception.GetBaseException() as HttpException; if ((exception != null) && (exception.GetHttpCode() == (int)HttpStatusCode.NotFound)) { e.Dismiss(); } } } The above code is taken from my another open source project Shrinkr, as you can see the global.asax is longer cluttered with any plumbing code. One special thing you have noticed that it is inherited from the UnityMvcApplication rather than regular HttpApplication. There are separate version of this class for each IoC Container like NinjectMvcApplication, StructureMapMvcApplication etc. Other than executing the built-in tasks, the Shrinkr also has few custom tasks which gets executed when the application starts. For example, when the application starts, we want to ensure that the default users (which is specified in the web.config) are created. The following is the custom task that is used to create those default users: public class CreateDefaultUsers : BootstrapperTask { protected override TaskContinuation ExecuteCore(IServiceLocator serviceLocator) { IUserRepository userRepository = serviceLocator.GetInstance<IUserRepository>(); IUnitOfWork unitOfWork = serviceLocator.GetInstance<IUnitOfWork>(); IEnumerable<User> users = serviceLocator.GetInstance<Settings>().DefaultUsers; bool shouldCommit = false; foreach (User user in users) { if (userRepository.GetByName(user.Name) == null) { user.AllowApiAccess(ApiSetting.InfiniteLimit); userRepository.Add(user); shouldCommit = true; } } if (shouldCommit) { unitOfWork.Commit(); } return TaskContinuation.Continue; } } There are several other Tasks in the Shrinkr that we are also using which you will find in that project. To create a custom bootstrapping task you have create a new class which either implements the IBootstrapperTask interface or inherits from the abstract BootstrapperTask class, I would recommend to start with the BootstrapperTask as it already has the required code that you have to write in case if you choose the IBootstrapperTask interface. As you can see in the above code we are overriding the ExecuteCore to create the default users, the MVCExtensions is responsible for populating the  ServiceLocator prior calling this method and in this method we are using the service locator to get the dependencies that are required to create the users (I will cover the custom dependencies registration in the next post). Once the users are created, we are returning a special enum, TaskContinuation as the return value, the TaskContinuation can have three values Continue (default), Skip and Break. The reason behind of having this enum is, in some  special cases you might want to skip the next task in the chain or break the complete chain depending upon the currently running task, in those cases you will use the other two values instead of the Continue. 
The last thing I want to cover in the bootstrapping task is the Order. By default all the built-in tasks as well as newly created task order is set to the DefaultOrder(a static property), in some special cases you might want to execute it before/after all the other tasks, in those cases you will assign the Order in the Task constructor. For Example, in Shrinkr, we want to run few background services when the all the tasks are executed, so we assigned the order as DefaultOrder + 1. Here is the code of that Task: public class ConfigureBackgroundServices : BootstrapperTask { private IEnumerable<IBackgroundService> backgroundServices; public ConfigureBackgroundServices() { Order = DefaultOrder + 1; } protected override TaskContinuation ExecuteCore(IServiceLocator serviceLocator) { backgroundServices = serviceLocator.GetAllInstances<IBackgroundService>().ToList(); backgroundServices.Each(service => service.Start()); return TaskContinuation.Continue; } protected override void DisposeCore() { backgroundServices.Each(service => service.Stop()); } } That’s it for today, in the next post I will cover the custom service registration, so stay tuned.

    Read the article

  • JavaScript Data Binding Frameworks

    - by dwahlin
    Data binding is where it's at nowadays when it comes to building client-centric Web applications. Developers experienced with desktop frameworks like WPF or web frameworks like ASP.NET, Silverlight, or others are used to being able to take model objects containing data and bind them to UI controls quickly and easily. When moving to client-side Web development, the data binding story hasn't been great, since neither HTML nor JavaScript natively supports data binding. This means that you have to write code to place data in a control and write code to extract it. Although it's certainly feasible to do it from scratch (many of us have done it this way for years), it's definitely tedious and not exactly the best solution when it comes to maintenance and re-use. Over the last few years several different script libraries have been released to simplify the process of binding data to HTML controls. In fact, the subject of data binding is becoming so popular that it seems like a new script library is being released nearly every week. Many of the libraries provide MVC/MVVM pattern support in client-side JavaScript apps and some even integrate directly with server frameworks like Node.js. Here's a quick list of a few of the available libraries that support data binding (if you know of any others please add a comment and I'll try to keep the list updated): AngularJS MVC framework for data binding (although it closely follows the MVVM pattern). Backbone.js MVC framework with support for models, key/value binding, custom events, and more. Derby Provides a real-time environment that runs in the browser and in Node.js. The library supports data binding and templates. Ember Provides support for templates that automatically update as data changes. JsViews Data binding framework that provides “interactive data-driven views built on top of JsRender templates”. jQXB Expression Binder Lightweight jQuery plugin that supports bi-directional data binding. KnockoutJS MVVM framework with robust support for data binding. For an excellent look at using KnockoutJS check out John Papa's course on Pluralsight. Meteor End to end framework that uses Node.js on the server and provides support for data binding on the client. Simpli5 JavaScript framework that provides support for two-way data binding. WinRT with HTML5/JavaScript If you're building Windows 8 applications using HTML5 and JavaScript there's built-in support for data binding in the WinJS library. I won't have time to write about each of these frameworks, but in the next post I'm going to talk about my (current) favorite when it comes to client-side JavaScript data binding libraries, which is AngularJS. AngularJS provides an extremely clean way, in my opinion, to extend HTML syntax to support data binding while keeping model objects (the objects that hold the data) free from custom framework method calls or other weirdness. While I'm writing up the next post, feel free to visit the AngularJS developer guide if you'd like additional details about the API and want to get started using it.

    Read the article

  • Can't run Eclipse after installing ADT Plugin

    - by user89439
So, I've installed the ADT Plugin, ran a HelloWorld, restarted my computer, and after that Eclipse can't run. A message appears: "An error has occurred. See the log file: /home/todi (...)" Here is the log file:
!SESSION 2011-07-26 22:51:59.381 ----------------------------------------------- eclipse.buildId=I20110613-1736 java.version=1.6.0_26 java.vendor=Sun Microsystems Inc. BootLoader constants: OS=win32, ARCH=x86, WS=win32, NL=pt_BR Framework arguments: -product org.eclipse.epp.package.java.product Command-line arguments: -os win32 -ws win32 -arch x86 -product org.eclipse.epp.package.java.product !ENTRY org.eclipse.update.configurator 4 0 2011-07-26 22:57:34.135 !MESSAGE Could not rename configuration temp file !ENTRY org.eclipse.update.configurator 4 0 2011-07-26 22:57:34.157 !MESSAGE Unable to save configuration file "C:\Program Files\eclipse\configuration\org.eclipse.update\platform.xml.tmp" !STACK 0 java.io.IOException: Unable to save configuration file "C:\Program Files\eclipse\configuration\org.eclipse.update\platform.xml.tmp" at org.eclipse.update.internal.configurator.PlatformConfiguration.save(PlatformConfiguration.java:690) at org.eclipse.update.internal.configurator.PlatformConfiguration.save(PlatformConfiguration.java:574) at org.eclipse.update.internal.configurator.PlatformConfiguration.startup(PlatformConfiguration.java:714) at org.eclipse.update.internal.configurator.ConfigurationActivator.getPlatformConfiguration(ConfigurationActivator.java:404) at org.eclipse.update.internal.configurator.ConfigurationActivator.initialize(ConfigurationActivator.java:136) at org.eclipse.update.internal.configurator.ConfigurationActivator.start(ConfigurationActivator.java:69) at org.eclipse.osgi.framework.internal.core.BundleContextImpl$1.run(BundleContextImpl.java:711) at java.security.AccessController.doPrivileged(Native Method) at org.eclipse.osgi.framework.internal.core.BundleContextImpl.startActivator(BundleContextImpl.java:702) at org.eclipse.osgi.framework.internal.core.BundleContextImpl.start(BundleContextImpl.java:683) at org.eclipse.osgi.framework.internal.core.BundleHost.startWorker(BundleHost.java:381) at org.eclipse.osgi.framework.internal.core.AbstractBundle.start(AbstractBundle.java:299) at org.eclipse.osgi.framework.util.SecureAction.start(SecureAction.java:440) at org.eclipse.osgi.internal.loader.BundleLoader.setLazyTrigger(BundleLoader.java:268) at org.eclipse.core.runtime.internal.adaptor.EclipseLazyStarter.postFindLocalClass(EclipseLazyStarter.java:107) at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.findLocalClass(ClasspathManager.java:462) at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.findLocalClass(DefaultClassLoader.java:216) at org.eclipse.osgi.internal.loader.BundleLoader.findLocalClass(BundleLoader.java:400) at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:476) at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:429) at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:417) at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107) at java.lang.ClassLoader.loadClass(Unknown Source) at org.eclipse.osgi.internal.loader.BundleLoader.loadClass(BundleLoader.java:345) at org.eclipse.osgi.framework.internal.core.BundleHost.loadClass(BundleHost.java:229) at org.eclipse.osgi.framework.internal.core.AbstractBundle.loadClass(AbstractBundle.java:1207) at
org.eclipse.equinox.internal.ds.model.ServiceComponent.createInstance(ServiceComponent.java:480) at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.createInstance(ServiceComponentProp.java:271) at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.build(ServiceComponentProp.java:332) at org.eclipse.equinox.internal.ds.InstanceProcess.buildComponent(InstanceProcess.java:588) at org.eclipse.equinox.internal.ds.ServiceReg.getService(ServiceReg.java:53) at org.eclipse.osgi.internal.serviceregistry.ServiceUse$1.run(ServiceUse.java:138) at java.security.AccessController.doPrivileged(Native Method) at org.eclipse.osgi.internal.serviceregistry.ServiceUse.getService(ServiceUse.java:136) at org.eclipse.osgi.internal.serviceregistry.ServiceRegistrationImpl.getService(ServiceRegistrationImpl.java:468) at org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.getService(ServiceRegistry.java:467) at org.eclipse.osgi.framework.internal.core.BundleContextImpl.getService(BundleContextImpl.java:594) at org.osgi.util.tracker.ServiceTracker.addingService(ServiceTracker.java:450) at org.osgi.util.tracker.ServiceTracker$Tracked.customizerAdding(ServiceTracker.java:980) at org.osgi.util.tracker.ServiceTracker$Tracked.customizerAdding(ServiceTracker.java:1) at org.osgi.util.tracker.AbstractTracked.trackAdding(AbstractTracked.java:262) at org.osgi.util.tracker.AbstractTracked.trackInitial(AbstractTracked.java:185) at org.osgi.util.tracker.ServiceTracker.open(ServiceTracker.java:348) at org.osgi.util.tracker.ServiceTracker.open(ServiceTracker.java:283) at org.eclipse.core.internal.runtime.InternalPlatform.getBundleGroupProviders(InternalPlatform.java:225) at org.eclipse.core.runtime.Platform.getBundleGroupProviders(Platform.java:1261) at org.eclipse.ui.internal.ide.IDEWorkbenchPlugin.getFeatureInfos(IDEWorkbenchPlugin.java:291) at org.eclipse.ui.internal.ide.WorkbenchActionBuilder.makeFeatureDependentActions(WorkbenchActionBuilder.java:1217) at org.eclipse.ui.internal.ide.WorkbenchActionBuilder.makeActions(WorkbenchActionBuilder.java:1026) at org.eclipse.ui.application.ActionBarAdvisor.fillActionBars(ActionBarAdvisor.java:147) at org.eclipse.ui.internal.ide.WorkbenchActionBuilder.fillActionBars(WorkbenchActionBuilder.java:341) at org.eclipse.ui.internal.WorkbenchWindow.fillActionBars(WorkbenchWindow.java:3564) at org.eclipse.ui.internal.WorkbenchWindow.(WorkbenchWindow.java:419) at org.eclipse.ui.internal.tweaklets.Workbench3xImplementation.createWorkbenchWindow(Workbench3xImplementation.java:31) at org.eclipse.ui.internal.Workbench.newWorkbenchWindow(Workbench.java:1920) at org.eclipse.ui.internal.Workbench.access$14(Workbench.java:1918) at org.eclipse.ui.internal.Workbench$68.runWithException(Workbench.java:3658) at org.eclipse.ui.internal.StartupThreading$StartupRunnable.run(StartupThreading.java:31) at org.eclipse.swt.widgets.RunnableLock.run(RunnableLock.java:35) at org.eclipse.swt.widgets.Synchronizer.runAsyncMessages(Synchronizer.java:135) at org.eclipse.swt.widgets.Display.runAsyncMessages(Display.java:4140) at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3757) at org.eclipse.ui.application.WorkbenchAdvisor.openWindows(WorkbenchAdvisor.java:803) at org.eclipse.ui.internal.Workbench$33.runWithException(Workbench.java:1595) at org.eclipse.ui.internal.StartupThreading$StartupRunnable.run(StartupThreading.java:31) at org.eclipse.swt.widgets.RunnableLock.run(RunnableLock.java:35) at org.eclipse.swt.widgets.Synchronizer.runAsyncMessages(Synchronizer.java:135) at 
org.eclipse.swt.widgets.Display.runAsyncMessages(Display.java:4140) at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3757) at org.eclipse.ui.internal.Workbench.runUI(Workbench.java:2604) at org.eclipse.ui.internal.Workbench.access$4(Workbench.java:2494) at org.eclipse.ui.internal.Workbench$7.run(Workbench.java:674) at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332) at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:667) at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:149) at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:123) at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:344) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:179) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:622) at org.eclipse.equinox.launcher.Main.basicRun(Main.java:577) at org.eclipse.equinox.launcher.Main.run(Main.java:1410) !ENTRY org.eclipse.equinox.p2.operations 4 0 2011-07-27 00:15:28.049 !MESSAGE Operation details !SUBENTRY 1 org.eclipse.equinox.p2.director 4 1 2011-07-27 00:15:28.049 !MESSAGE Cannot complete the install because some dependencies are not satisfiable !SUBENTRY 2 org.eclipse.equinox.p2.director 4 0 2011-07-27 00:15:28.049 !MESSAGE org.eclipse.linuxtools.callgraph.feature.group [0.0.2.201106060936] cannot be installed in this environment because its filter is not applicable. !ENTRY org.eclipse.equinox.p2.operations 4 0 2011-07-27 00:15:28.644 !MESSAGE Operation details !SUBENTRY 1 org.eclipse.equinox.p2.director 4 1 2011-07-27 00:15:28.644 !MESSAGE Cannot complete the install because some dependencies are not satisfiable !SUBENTRY 2 org.eclipse.equinox.p2.director 4 0 2011-07-27 00:15:28.644 !MESSAGE org.eclipse.linuxtools.callgraph.feature.group [0.0.2.201106060936] cannot be installed in this environment because its filter is not applicable. !ENTRY org.eclipse.equinox.p2.operations 4 0 2011-07-27 00:27:35.152 !MESSAGE Operation details !SUBENTRY 1 org.eclipse.equinox.p2.director 4 1 2011-07-27 00:27:35.158 !MESSAGE Cannot complete the install because some dependencies are not satisfiable !SUBENTRY 2 org.eclipse.equinox.p2.director 4 0 2011-07-27 00:27:35.159 !MESSAGE org.eclipse.linuxtools.callgraph.feature.group [0.0.2.201106060936] cannot be installed in this environment because its filter is not applicable. !ENTRY org.eclipse.equinox.p2.operations 4 0 2011-07-27 00:27:35.215 !MESSAGE Operation details !SUBENTRY 1 org.eclipse.equinox.p2.director 4 1 2011-07-27 00:27:35.216 !MESSAGE Cannot complete the install because some dependencies are not satisfiable !SUBENTRY 2 org.eclipse.equinox.p2.director 4 0 2011-07-27 00:27:35.216 !MESSAGE org.eclipse.linuxtools.callgraph.feature.group [0.0.2.201106060936] cannot be installed in this environment because its filter is not applicable. 
!ENTRY org.eclipse.equinox.p2.operations 4 0 2011-07-27 01:07:17.988 !MESSAGE Operation details !SUBENTRY 1 org.eclipse.equinox.p2.director 4 1 2011-07-27 01:07:18.006 !MESSAGE Cannot complete the install because some dependencies are not satisfiable !SUBENTRY 2 org.eclipse.equinox.p2.director 4 0 2011-07-27 01:07:18.006 !MESSAGE org.eclipse.linuxtools.callgraph.feature.group [0.0.2.201106060936] cannot be installed in this environment because its filter is not applicable. !ENTRY org.eclipse.equinox.p2.operations 4 0 2011-07-27 01:07:19.847 !MESSAGE Operation details !SUBENTRY 1 org.eclipse.equinox.p2.director 4 1 2011-07-27 01:07:19.848 !MESSAGE Cannot complete the install because some dependencies are not satisfiable !SUBENTRY 2 org.eclipse.equinox.p2.director 4 0 2011-07-27 01:07:19.848 !MESSAGE org.eclipse.linuxtools.callgraph.feature.group [0.0.2.201106060936] cannot be installed in this environment because its filter is not applicable.
I don't understand how the Windows-style path appeared... if anyone knows how to solve this, I'd appreciate it! Thank you for all your answers! Best regards, Alexandre Ferreira.

    Read the article

  • Setting Pidgin up for Lync 2013

    - by Stu2000
    I'm having difficulty setting up Pidgin to work with my company's Microsoft 365 communicator Lync 2013 (not 2010) account. I either receive a message stating "authentication failed" or "Incompatible authentication scheme chosen: NTLM", depending upon the user agent values used from this wiki. It appears that both of the user agent values that start with UCCAPI produce the "authentication failed" error, which I'm guessing is "closer" to the solution. I have triple-checked that the password is correct. Below are some images of my settings (I have changed the company name to "company" for anonymity). I am running Pidgin with a script in order to fix a write error issue:

export NSS_SSL_CBC_RANDOM_IV=0
pidgin -d

I am also using the latest version of SIPE (1.10.1) by using this ppa: https://launchpad.net/~aavelar/+archive/ppa
What settings do I need to change/add to get it to work?

    Read the article

  • 3 tier architecture in objective-c

    - by hba
    I just finished reading the Objective-C developer handbook from Apple, so I pretty much know everything there is to know about Objective-C (hee hee hee). I was wondering how to go about designing a 3-tier application in Objective-C. The 3 tiers would be a front-end application in Cocoa, a middle tier in pure Objective-C, and a back-end (data access, also in Objective-C, plus a MySQL db). I'm not really interested in discussing why I'd need a 3-tier architecture; I'd like to narrow the discussion to the 'how'. For example, can I have 3 separate Xcode projects, one for each tier? If so, how would I link the projects together? In Java, I can export each tier as a JAR file and form the proper associations.

    Read the article

  • Oracle Data Mining a Star Schema: Telco Churn Case Study

    - by charlie.berger
    There is a complete and detailed Telco Churn case study "How to" blog series just posted by Ari Mozes, ODM Dev. Manager. In it, Ari provides detailed guidance on how to leverage various strengths of Oracle Data Mining, including the ability to:

- mine star schemas, joining tables and views together to obtain a complete 360-degree view of a customer
- combine transactional data, e.g. call detail record (CDR) data, etc.
- define complex data transformation, model build, and model deploy analytical methodologies inside the Database

His blog is posted in a multi-part series. Below are some opening excerpts for the first 3 blog entries. This is an excellent resource for any novice to skilled data miner who wants to gain competitive advantage by mining their data inside the Oracle Database. Many thanks Ari!

Mining a Star Schema: Telco Churn Case Study (1 of 3)

One of the strengths of Oracle Data Mining is the ability to mine star schemas with minimal effort. Star schemas are commonly used in relational databases, and they often contain rich data with interesting patterns. While dimension tables may contain interesting demographics, fact tables will often contain user behavior, such as phone usage or purchase patterns. Both of these aspects - demographics and usage patterns - can provide insight into behavior. Churn is a critical problem in the telecommunications industry, and companies go to great lengths to reduce the churn of their customer base. One case study [1] describes a telecommunications scenario involving understanding, and identification of, churn, where the underlying data is present in a star schema. That case study is a good example for demonstrating just how natural it is for Oracle Data Mining to analyze a star schema, so it will be used as the basis for this series of posts......

Mining a Star Schema: Telco Churn Case Study (2 of 3)

This post will follow the transformation steps as described in the case study, but will use Oracle SQL as the means for preparing data. Please see the previous post for background material, including links to the case study and to scripts that can be used to replicate the stages in these posts.

1) Handling missing values for call data records

The CDR_T table records the number of phone minutes used by a customer per month and per call type (tariff). For example, the table may contain one record corresponding to the number of peak (call type) minutes in January for a specific customer, and another record associated with international calls in March for the same customer. This table is likely to be fairly dense (most type-month combinations for a given customer will be present) due to the coarse level of aggregation, but there may be some missing values. Missing entries may occur for a number of reasons: the customer made no calls of a particular type in a particular month, the customer switched providers during the timeframe, or perhaps there is a data entry problem. In the first situation, the correct interpretation of a missing entry would be to assume that the number of minutes for the type-month combination is zero. In the other situations, it is not appropriate to assume zero, but rather derive some representative value to replace the missing entries. The referenced case study takes the latter approach.
The data is segmented by customer and call type, and within a given customer-call type combination, an average number of minutes is computed and used as a replacement value. In SQL, we need to generate additional rows for the missing entries and populate those rows with appropriate values. To generate the missing rows, Oracle's partition outer join feature is a perfect fit.

select cust_id, cdre.tariff, cdre.month, mins
from cdr_t cdr partition by (cust_id)
  right outer join
    (select distinct tariff, month from cdr_t) cdre
    on (cdr.month = cdre.month and cdr.tariff = cdre.tariff);

.......

Mining a Star Schema: Telco Churn Case Study (3 of 3)

Now that the "difficult" work is complete - preparing the data - we can move to building a predictive model to help identify and understand churn. The case study suggests that separate models be built for different customer segments (high, medium, low, and very low value customer groups). To reduce the data to a single segment, a filter can be applied:

create or replace view churn_data_high as
select * from churn_prep where value_band = 'HIGH';

It is simple to take a quick look at the predictive aspects of the data on a univariate basis. While this does not capture the more complex multi-variate effects as would occur with the full-blown data mining algorithms, it can give a quick feel as to the predictive aspects of the data as well as validate the data preparation steps. Oracle Data Mining includes a predictive analytics package which enables quick analysis.

begin
  dbms_predictive_analytics.explain(
    'churn_data_high', 'churn_m6', 'expl_churn_tab');
end;
/

select * from expl_churn_tab where rank <= 5 order by rank;

ATTRIBUTE_NAME       ATTRIBUTE_SUBNAME EXPLANATORY_VALUE RANK
-------------------- ----------------- ----------------- ----------
LOS_BAND                                      .069167052          1
MINS_PER_TARIFF_MON  PEAK-5                   .034881648          2
REV_PER_MON          REV-5                    .034527798          3
DROPPED_CALLS                                 .028110322          4
MINS_PER_TARIFF_MON  PEAK-4                   .024698149          5

From the above results, it is clear that some predictors do contain information to help identify churn (explanatory value > 0). The strongest uni-variate predictor of churn appears to be the customer's (binned) length of service. The second strongest churn indicator appears to be the number of peak minutes used in the most recent month. The subname column contains the interior piece of the DM_NESTED_NUMERICALS column described in the previous post. By using the object relational approach, many related predictors are included within a single top-level column. .....

NOTE: These are just EXCERPTS. Click here to start reading the Oracle Data Mining a Star Schema: Telco Churn Case Study from the beginning.
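The excerpts stop just before the actual model build. As a rough sketch of what that next step could look like with the Oracle Data Mining PL/SQL API (not taken from Ari's posts: the settings table, model name, and choice of a decision tree algorithm are illustrative assumptions, while the churn_data_high view, cust_id case id, and churn_m6 target come from the case study above):

-- Settings table; the decision tree algorithm is an assumption, not the case study's choice.
create table churn_settings (setting_name varchar2(30), setting_value varchar2(4000));

begin
  insert into churn_settings (setting_name, setting_value)
    values (dbms_data_mining.algo_name, dbms_data_mining.algo_decision_tree);
  commit;

  -- Build a classification model on the HIGH value band prepared earlier.
  dbms_data_mining.create_model(
    model_name          => 'CHURN_MODEL_HIGH',
    mining_function     => dbms_data_mining.classification,
    data_table_name     => 'churn_data_high',
    case_id_column_name => 'cust_id',
    target_column_name  => 'churn_m6',
    settings_table_name => 'churn_settings');
end;
/

Once built, such a model can be applied with the SQL PREDICTION operator or DBMS_DATA_MINING.APPLY to score customers for churn risk; see Ari's full posts for the approach actually used in the case study.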

    Read the article

  • Visio 2010 forward engineer add-in for office 2010

    - by Ryan Ternier
    I have been scouring the internet for ages trying to see if there was a usable add-on for Visio 2010 that could export SQL scripts. Microsoft stopped including that functionality in Visio after 2003, which is a huge shame. Today I found an open source project from Alberto Ferrari. It's an add-in for Visio 2010 that allows you to generate SQL scripts from your DB diagram. It's still in beta, and the source is available.

Check it out here: http://sqlblog.com/blogs/alberto_ferrari/archive/2010/04/16/visio-forward-engineer-addin-for-office-2010.aspx

This saves me from having to do all my diagramming in SQL Server / VS 2010, and brings back the much-needed functionality that was lost.

    Read the article

  • Download and Share Visual Studio Color Schemes

    As developers we often spend a large part of our day staring at code within Visual Studio. If you are like me, after a while the default VS text color scheme starts to get a little boring. The good news is that Visual Studio allows you to completely customize the editor background and text colors to whatever you want, letting you tweak them to create the experience that is just right for your eyes and personality. You can then optionally export/import your color scheme preferences...

    Read the article

  • Hosting StreamInsight applications using WCF

    - by gsusx
    One of the fundamental differentiators of Microsoft's StreamInsight compared to other Complex Event Processing (CEP) technologies is its flexible deployment model. In that sense, a StreamInsight solution can be hosted within an application or as a server component. This duality contrasts with most of the popular CEP frameworks in the current market, which are almost exclusively server-based. Whatever the scenario, it's undoubtedly true that the ability to embed a CEP engine in your applications opens new possibilities...(read more)

    Read the article
