Search Results

Search found 11632 results on 466 pages for 'to field'.


  • NHibernate mapping error SQL Server 2008 Express

    - by developer
    Hi All, I tried an example from NHibernate in Action book and when I try to run the app, it throws an exception saying "Could not compile the mapping document: HelloNHibernate.Employee.hbm.xml" Below is my code, Employee.hbm.xml <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" auto-import="true"> <class name="HelloNHibernate.Employee, HelloNHibernate" lazy="false" table="Employee"> <id name="id" access="field"> <generator class="native"/> </id> <property name="name" access="field" column="name"/> <many-to-one access="field" name="manager" column="manager" cascade="all"/> </class> Program.cs using System; using System.Collections.Generic; using System.Linq; using System.Text; using NHibernate; using System.Reflection; using NHibernate.Cfg; namespace HelloNHibernate { class Program { static void Main(string[] args) { CreateEmployeeAndSaveToDatabase(); UpdateTobinAndAssignPierreHenriAsManager(); LoadEmployeesFromDatabase(); Console.WriteLine("Press any key to exit..."); Console.ReadKey(); } static void CreateEmployeeAndSaveToDatabase() { Employee tobin = new Employee(); tobin.name = "Tobin Harris"; using (ISession session = OpenSession()) { using (ITransaction transaction = session.BeginTransaction()) { session.Save(tobin); transaction.Commit(); } Console.WriteLine("Saved Tobin to the database"); } } static ISession OpenSession() { if (factory == null) { Configuration c = new Configuration(); c.AddAssembly(Assembly.GetCallingAssembly()); factory = c.BuildSessionFactory(); } return factory.OpenSession(); } static void LoadEmployeesFromDatabase() { using (ISession session = OpenSession()) { IQuery query = session.CreateQuery("from Employee as emp order by emp.name asc"); IList<Employee> foundEmployees = query.List<Employee>(); Console.WriteLine("\n{0} employees found:", foundEmployees.Count); foreach (Employee employee in foundEmployees) Console.WriteLine(employee.SayHello()); } } static void UpdateTobinAndAssignPierreHenriAsManager() { using (ISession session = OpenSession()) { using (ITransaction transaction = session.BeginTransaction()) { IQuery q = session.CreateQuery("from Employee where name='Tobin Harris'"); Employee tobin = q.List<Employee>()[0]; tobin.name = "Tobin David Harris"; Employee pierreHenri = new Employee(); pierreHenri.name = "Pierre Henri Kuate"; tobin.manager = pierreHenri; transaction.Commit(); Console.WriteLine("Updated Tobin and added Pierre Henri"); } } } static ISessionFactory factory; } } Employee.cs using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace HelloNHibernate { class Employee { public int id; public string name; public Employee manager; public string SayHello() { return string.Format("'Hello World!', said {0}.", name); } } } App.config <?xml version="1.0" encoding="utf-8" ?> <configuration> <configSections> <section name="hibernate-configuration" type="NHibernate.Cfg.ConfigurationSectionHandler,NHibernate"/> </configSections> <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2"> <session-factory> <property name="connection.provider">NHibernate.Connection.DriverConnectionProvider</property> <property name="connection.driver_class">NHibernate.Driver.SqlClientDriver</property> <property name="connection.connection_string"> Data Source=SQLEXPRESS2008;Integrated Security=True; User ID=SQL2008;Password=;initial catalog=HelloNHibernate </property> <property name="dialect">NHibernate.Dialect.MsSql2008Dialect</property> <property name="show_sql">false</property> 
</session-factory> </hibernate-configuration> </configuration>
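
    A quick way to narrow down an error like this is to surface the inner exception that NHibernate wraps inside the MappingException; it usually names the exact cause (for example an .hbm.xml file that is not marked as an Embedded Resource, or a mapping file that is not well-formed XML - the mapping as posted has no closing </hibernate-mapping> tag, though that may just be a paste omission). A minimal diagnostic sketch, assuming the same Configuration/AddAssembly setup as the post:

```csharp
using System;
using System.Reflection;
using NHibernate;
using NHibernate.Cfg;

class MappingDiagnostics
{
    static void Main()
    {
        try
        {
            var cfg = new Configuration();
            cfg.AddAssembly(Assembly.GetExecutingAssembly());
            cfg.BuildSessionFactory();
            Console.WriteLine("Mappings compiled successfully.");
        }
        catch (MappingException ex)
        {
            // The outer message only says which document failed to compile;
            // the inner exception carries the actual reason.
            Console.WriteLine(ex.Message);
            if (ex.InnerException != null)
                Console.WriteLine(ex.InnerException.Message);
        }
    }
}
```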

    Read the article

  • Problem with jQuery and ASP.Net MVC

    - by robert_d
    I have a problem with jQuery, here is how my web app works Search web page which contains a form and jQuery script posts data to Search() action in Home controller after user clicks button1 button. Search.aspx: <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<GLSChecker.Models.WebGLSQuery>" %> <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server"> Title </asp:Content> <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> <h2>Search</h2> <% Html.EnableClientValidation(); %> <% using (Html.BeginForm()) {%> <fieldset> <div class="editor-label"> <%: Html.LabelFor(model => model.Url) %> </div> <div class="editor-field"> <%: Html.TextBoxFor(model => model.Url, new { size = "50" } ) %> <%: Html.ValidationMessageFor(model => model.Url) %> </div> <div class="editor-label"> <%: Html.LabelFor(model => model.Location) %> </div> <div class="editor-field"> <%: Html.TextBoxFor(model => model.Location, new { size = "50" } ) %> <%: Html.ValidationMessageFor(model => model.Location) %> </div> <div class="editor-label"> <%: Html.LabelFor(model => model.KeywordLines) %> </div> <div class="editor-field"> <%: Html.TextAreaFor(model => model.KeywordLines, 10, 60, null)%> <%: Html.ValidationMessageFor(model => model.KeywordLines)%> </div> <p> <input id ="button1" type="submit" value="Search" /> </p> </fieldset> <% } %> <script src="../../Scripts/jquery-1.4.1.js" type="text/javascript"></script> <script type="text/javascript"> jQuery("#button1").click(function (e) { window.setInterval(refreshResult, 5000); }); function refreshResult() { jQuery("#divResult").load("/Home/Refresh"); } </script> <div id="divResult"> </div> </asp:Content> [HttpPost] public ActionResult Search(WebGLSQuery queryToCreate) { if (!ModelState.IsValid) return View("Search"); queryToCreate.Remote_Address = HttpContext.Request.ServerVariables["REMOTE_ADDR"]; Session["Result"] = null; SearchKeywordLines(queryToCreate); Thread.Sleep(15000); return View("Search"); }//Search() After button1 button is clicked the above script from Search web page runs Search() action in controller runs for longer period of time. I simulate this in testing by putting Thread.Sleep(15000); in Search()action. 5 sec. after Submit button was pressed, the above jQuery script calls Refresh() action in Home controller. public ActionResult Refresh() { ViewData["Result"] = DateTime.Now; return PartialView(); } Refresh() renders this partial <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" % <%= ViewData["Result"] % The problem is that in Internet Explorer 8 there is only one request to /Home/Refresh; in Firefox 3.6.3 all requests to /Home/Refresh are made but nothing is displayed on the web page. I would be grateful for helpful suggestions.
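
    The single-request behaviour in IE8 is typical of the browser answering every later jQuery .load() poll from its cache. One way to rule that out from the server side is to mark the partial action as non-cacheable; a hedged sketch (these attribute settings are one reasonable choice, not the only fix - the script could equally append a cache-busting query parameter):

```csharp
using System;
using System.Web.Mvc;

public class HomeController : Controller
{
    // Telling the client not to cache the partial means each 5-second poll
    // actually reaches the server instead of being served from IE's cache.
    [OutputCache(NoStore = true, Duration = 0, VaryByParam = "*")]
    public ActionResult Refresh()
    {
        ViewData["Result"] = DateTime.Now;
        return PartialView();
    }
}
```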

    Read the article

  • Dependency Injection with Spring/Junit/JPA

    - by Steve
    I'm trying to create JUnit tests for my JPA DAO classes, using Spring 2.5.6 and JUnit 4.8.1. My test case looks like this: @RunWith(SpringJUnit4ClassRunner.class) @ContextConfiguration(locations={"classpath:config/jpaDaoTestsConfig.xml"} ) public class MenuItem_Junit4_JPATest extends BaseJPATestCase { private ApplicationContext context; private InputStream dataInputStream; private IDataSet dataSet; @Resource private IMenuItemDao menuItemDao; @Test public void testFindAll() throws Exception { assertEquals(272, menuItemDao.findAll().size()); } ... Other test methods ommitted for brevity ... } I have the following in my jpaDaoTestsConfig.xml: <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:p="http://www.springframework.org/schema/p" xmlns:tx="http://www.springframework.org/schema/tx" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd"> <!-- uses the persistence unit defined in the META-INF/persistence.xml JPA configuration file --> <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalEntityManagerFactoryBean"> <property name="persistenceUnitName" value="CONOPS_PU" /> </bean> <bean id="groupDao" class="mil.navy.ndms.conops.common.dao.impl.jpa.GroupDao" lazy-init="true" /> <bean id="permissionDao" class="mil.navy.ndms.conops.common.dao.impl.jpa.PermissionDao" lazy-init="true" /> <bean id="applicationUserDao" class="mil.navy.ndms.conops.common.dao.impl.jpa.ApplicationUserDao" lazy-init="true" /> <bean id="conopsUserDao" class="mil.navy.ndms.conops.common.dao.impl.jpa.ConopsUserDao" lazy-init="true" /> <bean id="menuItemDao" class="mil.navy.ndms.conops.common.dao.impl.jpa.MenuItemDao" lazy-init="true" /> <!-- enables interpretation of the @Required annotation to ensure that dependency injection actually occures --> <bean class="org.springframework.beans.factory.annotation.RequiredAnnotationBeanPostProcessor"/> <!-- enables interpretation of the @PersistenceUnit/@PersistenceContext annotations providing convenient access to EntityManagerFactory/EntityManager --> <bean class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor"/> <!-- transaction manager for use with a single JPA EntityManagerFactory for transactional data access to a single datasource --> <bean id="jpaTransactionManager" class="org.springframework.orm.jpa.JpaTransactionManager"> <property name="entityManagerFactory" ref="entityManagerFactory"/> </bean> <!-- enables interpretation of the @Transactional annotation for declerative transaction managment using the specified JpaTransactionManager --> <tx:annotation-driven transaction-manager="jpaTransactionManager" proxy-target-class="false"/> </beans> Now, when I try to run this, I get the following: SEVERE: Caught exception while allowing TestExecutionListener [org.springframework.test.context.support.DependencyInjectionTestExecutionListener@fa60fa6] to prepare test instance [null(mil.navy.ndms.conops.common.dao.impl.MenuItem_Junit4_JPATest)] org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'mil.navy.ndms.conops.common.dao.impl.MenuItem_Junit4_JPATest': Injection of resource fields failed; nested exception is java.lang.IllegalStateException: Specified field type [interface javax.persistence.EntityManagerFactory] is incompatible 
with resource type [javax.persistence.EntityManager] at org.springframework.context.annotation.CommonAnnotationBeanPostProcessor.postProcessAfterInstantiation(CommonAnnotationBeanPostProcessor.java:292) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:959) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireBeanProperties(AbstractAutowireCapableBeanFactory.java:329) at org.springframework.test.context.support.DependencyInjectionTestExecutionListener.injectDependencies(DependencyInjectionTestExecutionListener.java:110) at org.springframework.test.context.support.DependencyInjectionTestExecutionListener.prepareTestInstance(DependencyInjectionTestExecutionListener.java:75) at org.springframework.test.context.TestContextManager.prepareTestInstance(TestContextManager.java:255) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.createTest(SpringJUnit4ClassRunner.java:93) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.invokeTestMethod(SpringJUnit4ClassRunner.java:130) at org.junit.internal.runners.JUnit4ClassRunner.runMethods(JUnit4ClassRunner.java:61) at org.junit.internal.runners.JUnit4ClassRunner$1.run(JUnit4ClassRunner.java:54) at org.junit.internal.runners.ClassRoadie.runUnprotected(ClassRoadie.java:34) at org.junit.internal.runners.ClassRoadie.runProtected(ClassRoadie.java:44) at org.junit.internal.runners.JUnit4ClassRunner.run(JUnit4ClassRunner.java:52) at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:45) at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:460) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:673) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:386) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:196) Caused by: java.lang.IllegalStateException: Specified field type [interface javax.persistence.EntityManagerFactory] is incompatible with resource type [javax.persistence.EntityManager] at org.springframework.beans.factory.annotation.InjectionMetadata$InjectedElement.checkResourceType(InjectionMetadata.java:159) at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor$PersistenceElement.(PersistenceAnnotationBeanPostProcessor.java:559) at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor$1.doWith(PersistenceAnnotationBeanPostProcessor.java:359) at org.springframework.util.ReflectionUtils.doWithFields(ReflectionUtils.java:492) at org.springframework.util.ReflectionUtils.doWithFields(ReflectionUtils.java:469) at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor.findPersistenceMetadata(PersistenceAnnotationBeanPostProcessor.java:351) at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor.postProcessMergedBeanDefinition(PersistenceAnnotationBeanPostProcessor.java:296) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyMergedBeanDefinitionPostProcessors(AbstractAutowireCapableBeanFactory.java:745) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:448) at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory$1.run(AbstractAutowireCapableBeanFactory.java:409) at java.security.AccessController.doPrivileged(AccessController.java:219) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:380) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:264) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:221) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:261) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:185) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:168) at org.springframework.context.annotation.CommonAnnotationBeanPostProcessor.autowireResource(CommonAnnotationBeanPostProcessor.java:435) at org.springframework.context.annotation.CommonAnnotationBeanPostProcessor.getResource(CommonAnnotationBeanPostProcessor.java:409) at org.springframework.context.annotation.CommonAnnotationBeanPostProcessor$ResourceElement.getResourceToInject(CommonAnnotationBeanPostProcessor.java:537) at org.springframework.beans.factory.annotation.InjectionMetadata$InjectedElement.inject(InjectionMetadata.java:180) at org.springframework.beans.factory.annotation.InjectionMetadata.injectFields(InjectionMetadata.java:105) at org.springframework.context.annotation.CommonAnnotationBeanPostProcessor.postProcessAfterInstantiation(CommonAnnotationBeanPostProcessor.java:289) ... 18 more It seems to be telling me that its attempting to store an EntityManager object into an EntityManagerFactory field, but I don't understand how or why. My DAO classes accept both an EntityManager and EntityManagerFactory via the @PersistenceContext attribute, and they work find if I load them up and run them without the @ContextConfiguration attribute (i.e. if I just use the XmlApplcationContext to load the DAO and the EntityManagerFactory directly in setUp ()). Any insights would be appreciated. Thanks. --Steve

    Read the article

  • ZIPLIB problem on opening zip files

    - by Ahmet vardar
    I am using this class to create zip <?php // vim: expandtab sw=4 ts=4 sts=4: class zipfile { var $datasec = array(); var $ctrl_dir = array(); var $eof_ctrl_dir = "\x50\x4b\x05\x06\x00\x00\x00\x00"; var $old_offset = 0; function unix2DosTime($unixtime = 0) { $timearray = ($unixtime == 0) ? getdate() : getdate($unixtime); if ($timearray['year'] < 1980) { $timearray['year'] = 1980; $timearray['mon'] = 1; $timearray['mday'] = 1; $timearray['hours'] = 0; $timearray['minutes'] = 0; $timearray['seconds'] = 0; } // end if return (($timearray['year'] - 1980) << 25) | ($timearray['mon'] << 21) | ($timearray['mday'] << 16) | ($timearray['hours'] << 11) | ($timearray['minutes'] << 5) | ($timearray['seconds'] >> 1); } // end of the 'unix2DosTime()' method function addFile($data, $name, $time = 0) { $name = str_replace('\\', '/', $name); $dtime = dechex($this->unix2DosTime($time)); $hexdtime = '\x' . $dtime[6] . $dtime[7] . '\x' . $dtime[4] . $dtime[5] . '\x' . $dtime[2] . $dtime[3] . '\x' . $dtime[0] . $dtime[1]; eval('$hexdtime = "' . $hexdtime . '";'); $fr = "\x50\x4b\x03\x04"; $fr .= "\x14\x00"; // ver needed to extract $fr .= "\x00\x00"; // gen purpose bit flag $fr .= "\x08\x00"; // compression method $fr .= $hexdtime; // last mod time and date // "local file header" segment $unc_len = strlen($data); $crc = crc32($data); $zdata = gzcompress($data); $zdata = substr(substr($zdata, 0, strlen($zdata) - 4), 2); // fix crc bug $c_len = strlen($zdata); $fr .= pack('V', $crc); // crc32 $fr .= pack('V', $c_len); // compressed filesize $fr .= pack('V', $unc_len); // uncompressed filesize $fr .= pack('v', strlen($name)); // length of filename $fr .= pack('v', 0); // extra field length $fr .= $name; // "file data" segment $fr .= $zdata; // "data descriptor" segment (optional but necessary if archive is not // served as file) $fr .= pack('V', $crc); // crc32 $fr .= pack('V', $c_len); // compressed filesize $fr .= pack('V', $unc_len); // uncompressed filesize // add this entry to array $this -> datasec[] = $fr; // now add to central directory record $cdrec = "\x50\x4b\x01\x02"; $cdrec .= "\x00\x00"; // version made by $cdrec .= "\x14\x00"; // version needed to extract $cdrec .= "\x00\x00"; // gen purpose bit flag $cdrec .= "\x08\x00"; // compression method $cdrec .= $hexdtime; // last mod time & date $cdrec .= pack('V', $crc); // crc32 $cdrec .= pack('V', $c_len); // compressed filesize $cdrec .= pack('V', $unc_len); // uncompressed filesize $cdrec .= pack('v', strlen($name) ); // length of filename $cdrec .= pack('v', 0 ); // extra field length $cdrec .= pack('v', 0 ); // file comment length $cdrec .= pack('v', 0 ); // disk number start $cdrec .= pack('v', 0 ); // internal file attributes $cdrec .= pack('V', 32 ); // external file attributes - 'archive' bit set $cdrec .= pack('V', $this -> old_offset ); // relative offset of local header $this -> old_offset += strlen($fr); $cdrec .= $name; // optional extra field, file comment goes here // save to central directory $this -> ctrl_dir[] = $cdrec; } // end of the 'addFile()' method function file() { $data = implode('', $this -> datasec); $ctrldir = implode('', $this -> ctrl_dir); return $data . $ctrldir . $this -> eof_ctrl_dir . pack('v', sizeof($this -> ctrl_dir)) . // total # of entries "on this disk" pack('v', sizeof($this -> ctrl_dir)) . // total # of entries overall pack('V', strlen($ctrldir)) . // size of central dir pack('V', strlen($data)) . 
// offset to start of central dir "\x00\x00"; // .zip file comment length } // end of the 'file()' method function addFiles($files ) { foreach($files as $file) { if (is_file($file)) //directory check { $data = implode("",file($file)); $this->addFile($data,$file); } } } function output($file) { $fp=fopen($file,"w"); fwrite($fp,$this->file()); fclose($fp); } } // end of the 'zipfile' class ?> It creates zip file but when i try to open it on Mac os x snow leopard and windows 7, it doesnt open. on mac i had this error: Error 1: operation not permitted Any idea ? thanks

    Read the article

  • ASP.net MVC 2.0 using the same form for adding and editing.

    - by Chevex
    I would like to use the same view for editing a blog post and adding a blog post. However, I'm having an issue with the ID. When adding a blog post, I have no need for an ID value to be posted. When model binding binds the form values to the BlogPost object in the controller, it will auto-generate the ID in entity framework entity. When I am editing a blog post I DO need a hidden form field to store the ID in so that it accompanies the next form post. Here is the view I have right now. <% using (Html.BeginForm("CommitEditBlogPost", "Admin")) { %> <% if (Model != null) { %> <%: Html.HiddenFor(x => x.Id)%> <% } %> Title:<br /> <%: Html.TextBoxFor(x => x.Title, new { Style = "Width: 90%;" })%> <br /> <br /> Summary:<br /> <%: Html.TextAreaFor(x => x.Summary, new { Style = "Width: 90%; Height: 50px;" }) %> <br /> <br /> Body:<br /> <%: Html.TextAreaFor(x => x.Body, new { Style = "Height: 250px; Width: 90%;" })%> <br /> <br /> <input type="submit" value="Submit" /> <% } %> Right now checking if the model is coming in NULL is a great way to know if I'm editing a blog post or adding one, because when I'm adding one it will be null as it hasn't been created yet. The problem comes in when there is an error and the entity is invalid. When the controller renders the form after an invalid model the Model != null evaluates to false, even though we are editing a post and there is clearly a model. If I render the hidden input field for ID when adding a post, I get an error stating that the ID can't be null. Any help is appreciated. EDIT: I went with OJ's answer for this question, however I discovered something that made me feel silly and I wanted to share it just in case anyone was having a similar issue. The page the adds/edits blogs does not even need a hidden field for id, ever. The reason is because when I go to add a blog I do a GET to this relative URL BlogProject/Admin/AddBlogPost This URL does not contain an ID and the action method just renders the page. The page does a POST to the same URL when adding the blog post. The incoming BlogPost entity has a null Id and is generated by EF during save changes. The same thing happens when I edit blog posts. The URL is BlogProject/Admin/EditBlogPost/{Id} This URL contains the id of the blog post and since the page is posting back to the exact same URL the id goes with the POST to the action method that executes the edit. The only problem I encountered with this is that the action methods cannot have identical signatures. [HttpGet] public ViewResult EditBlogPost(int Id) { } [HttpPost] public ViewResult EditBlogPost(int Id) { } The compiler will yell at you if you try to use these two methods above. It is far too convenient that the Id will be posted back when doing a Html.BeginForm() with no arguments for action or controller. So rather than change the name of the POST method I just modified the arguments to include a FormCollection. Like this: [HttpPost] public ViewResult EditBlogPost(int Id, FormCollection formCollection) { // You can then use formCollection as the IValueProvider for UpdateModel() // and TryUpdateModel() if you wish. I mean, you might as well use the // argument since you're taking it. } The formCollection variable is filled via model binding with the same content that Request.Form would be by default. You don't have to use this collection for UpdateModel() or TryUpdateModel() but I did just so I didn't feel like that collection was pointless since it really was just to make the method signature different from its GET counterpart. 
Thanks for the help guys!
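
    For readers hitting the same GET/POST signature clash: another common way around it is to let the POST overload bind the edited entity directly, which also removes the need for a FormCollection parameter. A sketch under assumed types - BlogPost, IBlogRepository and its methods are placeholders for whatever data access the project actually uses:

```csharp
using System.Web.Mvc;

public class AdminController : Controller
{
    // 'repository' and its GetBlogPost/UpdateBlogPost methods are hypothetical.
    private readonly IBlogRepository repository;

    public AdminController(IBlogRepository repository)
    {
        this.repository = repository;
    }

    [HttpGet]
    public ViewResult EditBlogPost(int id)
    {
        return View(repository.GetBlogPost(id));
    }

    [HttpPost]
    public ActionResult EditBlogPost(int id, BlogPost post)
    {
        if (!ModelState.IsValid)
            return View(post);               // redisplay with validation errors

        repository.UpdateBlogPost(id, post); // hypothetical persistence call
        return RedirectToAction("Index");
    }
}
```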

    Read the article

  • Problem with jQuery and ASP.Net MVC 2

    - by robert_d
    I have a problem with jQuery, here is how my web app works Search.aspx web page which contains a form and jQuery script posts data to Search() action in Home controller after user clicks button1 button. Search.aspx: <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<GLSChecker.Models.WebGLSQuery>" %> <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server"> Title </asp:Content> <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> <h2>Search</h2> <% Html.EnableClientValidation(); %> <% using (Html.BeginForm()) {%> <fieldset> <div class="editor-label"> <%: Html.LabelFor(model => model.Url) %> </div> <div class="editor-field"> <%: Html.TextBoxFor(model => model.Url, new { size = "50" } ) %> <%: Html.ValidationMessageFor(model => model.Url) %> </div> <div class="editor-label"> <%: Html.LabelFor(model => model.Location) %> </div> <div class="editor-field"> <%: Html.TextBoxFor(model => model.Location, new { size = "50" } ) %> <%: Html.ValidationMessageFor(model => model.Location) %> </div> <div class="editor-label"> <%: Html.LabelFor(model => model.KeywordLines) %> </div> <div class="editor-field"> <%: Html.TextAreaFor(model => model.KeywordLines, 10, 60, null)%> <%: Html.ValidationMessageFor(model => model.KeywordLines)%> </div> <p> <input id ="button1" type="submit" value="Search" /> </p> </fieldset> <% } %> <script src="../../Scripts/jquery-1.4.1.js" type="text/javascript"></script> <script type="text/javascript"> jQuery("#button1").click(function (e) { window.setInterval(refreshResult, 5000); }); function refreshResult() { jQuery("#divResult").load("/Home/Refresh"); } </script> <div id="divResult"> </div> </asp:Content> [HttpPost] public ActionResult Search(WebGLSQuery queryToCreate) { if (!ModelState.IsValid) return View("Search"); queryToCreate.Remote_Address = HttpContext.Request.ServerVariables["REMOTE_ADDR"]; Session["Result"] = null; SearchKeywordLines(queryToCreate); Thread.Sleep(15000); return View("Search"); }//Search() After button1 button is clicked the above script from Search.aspx web page runs. Search() action in controller runs for longer period of time. I simulate this in testing by putting Thread.Sleep(15000); in Search()action. 5 sec. after Submit button was pressed, the above jQuery script calls Refresh() action in Home controller. public ActionResult Refresh() { ViewData["Result"] = DateTime.Now; return PartialView(); } Refresh() renders this partial <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" % <%= ViewData["Result"] % The problem is that in Internet Explorer 8 there is only one request to /Home/Refresh; in Firefox 3.6.3 all requests to /Home/Refresh are made but nothing is displayed on the web page. Another problem with Firefox is that requests to /Home/Refresh are made every second not every 5 seconds. I noticed that after I clear Firefox cache the script works well first time button1 is pressed, but after that it doesn't work. I would be grateful for helpful suggestions.

    Read the article

  • Get the property, as a string, from an Expression<Func<TModel,TProperty>>

    - by Jaxidian
    I use some strongly-typed expressions that get serialized to allow my UI code to have strongly-typed sorting and searching expressions. These are of type Expression<Func<TModel,TProperty>> and are used as such: SortOption.Field = (p => p.FirstName);. I've gotten this working perfectly for this simple case. The code that I'm using for parsing the "FirstName" property out of there is actually reusing some existing functionality in a third-party product that we use and it works great, until we start working with deeply-nested properties(SortOption.Field = (p => p.Address.State.Abbreviation);). This code has some very different assumptions in the need to support deeply-nested properties. As for what this code does, I don't really understand it and rather than changing that code, I figured I should just write from scratch this functionality. However, I don't know of a good way to do this. I suspect we can do something better than doing a ToString() and performing string parsing. So what's a good way to do this to handle the trivial and deeply-nested cases? Requirements: Given the expression p => p.FirstName I need a string of "FirstName". Given the expression p => p.Address.State.Abbreviation I need a string of "Address.State.Abbreviation" While it's not important for an answer to my question, I suspect my serialization/deserialization code could be useful to somebody else who finds this question in the future, so it is below. Again, this code is not important to the question - I just thought it might help somebody. Note that DynamicExpression.ParseLambda comes from the Dynamic LINQ stuff and Property.PropertyToString() is what this question is about. /// <summary> /// This defines a framework to pass, across serialized tiers, sorting logic to be performed. /// </summary> /// <typeparam name="TModel">This is the object type that you are filtering.</typeparam> /// <typeparam name="TProperty">This is the property on the object that you are filtering.</typeparam> [Serializable] public class SortOption<TModel, TProperty> : ISerializable where TModel : class { /// <summary> /// Convenience constructor. /// </summary> /// <param name="property">The property to sort.</param> /// <param name="isAscending">Indicates if the sorting should be ascending or descending</param> /// <param name="priority">Indicates the sorting priority where 0 is a higher priority than 10.</param> public SortOption(Expression<Func<TModel, TProperty>> property, bool isAscending = true, int priority = 0) { Property = property; IsAscending = isAscending; Priority = priority; } /// <summary> /// Default Constructor. /// </summary> public SortOption() : this(null) { } /// <summary> /// This is the field on the object to filter. /// </summary> public Expression<Func<TModel, TProperty>> Property { get; set; } /// <summary> /// This indicates if the sorting should be ascending or descending. /// </summary> public bool IsAscending { get; set; } /// <summary> /// This indicates the sorting priority where 0 is a higher priority than 10. /// </summary> public int Priority { get; set; } #region Implementation of ISerializable /// <summary> /// This is the constructor called when deserializing a SortOption. /// </summary> protected SortOption(SerializationInfo info, StreamingContext context) { IsAscending = info.GetBoolean("IsAscending"); Priority = info.GetInt32("Priority"); // We just persisted this by the PropertyName. So let's rebuild the Lambda Expression from that. 
Property = DynamicExpression.ParseLambda<TModel, TProperty>(info.GetString("Property"), default(TModel), default(TProperty)); } /// <summary> /// Populates a <see cref="T:System.Runtime.Serialization.SerializationInfo"/> with the data needed to serialize the target object. /// </summary> /// <param name="info">The <see cref="T:System.Runtime.Serialization.SerializationInfo"/> to populate with data. </param> /// <param name="context">The destination (see <see cref="T:System.Runtime.Serialization.StreamingContext"/>) for this serialization. </param> public void GetObjectData(SerializationInfo info, StreamingContext context) { // Just stick the property name in there. We'll rebuild the expression based on that on the other end. info.AddValue("Property", Property.PropertyToString()); info.AddValue("IsAscending", IsAscending); info.AddValue("Priority", Priority); } #endregion }
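
    For anyone looking for the missing piece, a minimal sketch of what a PropertyToString helper can look like (the method name matches the usage above, but this implementation is an assumption, not the third-party code the poster mentions): walk the MemberExpression chain from the lambda body and join the member names.

```csharp
using System;
using System.Collections.Generic;
using System.Linq.Expressions;

public static class ExpressionExtensions
{
    // Turns p => p.FirstName into "FirstName" and
    // p => p.Address.State.Abbreviation into "Address.State.Abbreviation".
    public static string PropertyToString(this LambdaExpression property)
    {
        Expression body = property.Body;

        // Unwrap Convert(...) nodes, e.g. when a value type is boxed to object.
        var unary = body as UnaryExpression;
        if (unary != null)
            body = unary.Operand;

        var names = new Stack<string>();
        var member = body as MemberExpression;
        while (member != null)
        {
            names.Push(member.Member.Name);
            member = member.Expression as MemberExpression;
        }
        return string.Join(".", names.ToArray());
    }
}
```

    Because Expression<Func<TModel, TProperty>> derives from LambdaExpression, this can be called directly as Property.PropertyToString() in GetObjectData above.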

    Read the article

  • Replacing objects, handling clones, dealing with write logs

    - by Alix
    Hi everyone, I'm dealing with a problem I can't figure out how to solve, and I'd love to hear some suggestions. [NOTE: I realise I'm asking several questions; however, answers need to take into account all of the issues, so I cannot split this into several questions] Here's the deal: I'm implementing a system that underlies user applications and that protect shared objects from concurrent accesses. The application programmer (whose application will run on top of my system) defines such shared objects like this: public class MyAtomicObject { // These are just examples of fields you may want to have in your class. public virtual int x { get; set; } public virtual List<int> list { get; set; } public virtual MyClassA objA { get; set; } public virtual MyClassB objB { get; set; } } As you can see they declare the fields of their class as auto-generated properties (auto-generated means they don't need to implement get and set). This is so that I can go in and extend their class and implement each get and set myself in order to handle possible concurrent accesses, etc. This is all well and good, but now it starts to get ugly: the application threads run transactions, like this: The thread signals it's starting a transaction. This means we now need to monitor its accesses to the fields of the atomic objects. The thread runs its code, possibly accessing fields for reading or writing. If there are accesses for writing, we'll hide them from the other transactions (other threads), and only make them visible in step 3. This is because the transaction may fail and have to roll back (undo) its updates, and in that case we don't want other threads to see its "dirty" data. The thread signals it wants to commit the transaction. If the commit is successful, the updates it made will now become visible to everyone else. Otherwise, the transaction will abort, the updates will remain invisible, and no one will ever know the transaction was there. So basically the concept of transaction is a series of accesses that appear to have happened atomically, that is, all at the same time, in the same instant, which would be the moment of successful commit. (This is as opposed to its updates becoming visible as it makes them) In order to hide the write accesses in step 2, I clone the accessed field (let's say it's the field list) and put it in the transaction's write log. After that, any time the transaction accesses list, it will actually be accessing the clone in its write log, and not the global copy everyone else sees. Like this, any changes it makes will be done to the (invisible) clone, not to the global copy. If in step 3 the commit is successful, the transaction should replace the global copy with the updated list it has in its write log, and then the changes become visible for everyone else at once. It would be something like this: myAtomicObject.list = updatedCloneOfListInTheWriteLog; Problem #1: possible references to the list. Let's say someone puts a reference to the global list in a dictionary. When I do... myAtomicObject.list = updatedCloneOfListInTheWriteLog; ...I'm just replacing the reference in the field list, but not the real object (I'm not overwriting the data), so in the dictionary we'll still have a reference to the old version of the list. A possible solution would be to overwrite the data (in the case of a list, empty the global list and add all the elements of the clone). More generically, I would need to copy the fields of one list to the other. 
I can do this with reflection, but that's not very pretty. Is there any other way to do it? Problem #2: even if problem #1 is solved, I still have a similar problem with the clone: the application programmer doesn't know I'm giving him a clone and not the global copy. What if he puts the clone in a dictionary? Then at commit there will be some references to the global copy and some to the clone, when in truth they should all point to the same object. I thought about providing a wrapper object that contains both the cloned list and a pointer to the global copy, but the programmer doesn't know about this wrapper, so they're not going to use the pointer at all. The wrapper would be like this: public class Wrapper<T> : T { // This would be the pointer to the global copy. The local data is contained in whatever fields the wrapper inherits from T. private T thisPtr; } I do need this wrapper for comparisons: if I have a dictionary that has an entry with the global copy as key, if I look it up with the clone, like this: dictionary[updatedCloneOfListInTheWriteLog] I need it to return the entry, that is, to think that updatedCloneOfListInTheWriteLog and the global copy are the same thing. For this, I can just override Equals, GetHashCode, operator== and operator!=, no problem. However I still don't know how to solve the case in which the programmer unknowingly inserts a reference to the clone in a dictionary. Problem #3: the wrapper must extend the class of the object it wraps (if it's wrapping MyClassA, it must extend MyClassA) so that it's accepted wherever an object of that class (MyClass) would be accepted. However, that class (MyClassA) may be final. This is pretty horrible :$. Any suggestions? I don't need to use a wrapper, anything you can think of is fine. What I cannot change is the write log (I need to have a write log) and the fact that the programmer doesn't know about the clone. I hope I've made some sense. Feel free to ask for more info if something needs some clearing up. Thanks so much!
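
    On problem #1 specifically, the reflection-based copy the poster mentions is workable in practice. A minimal sketch (a generic helper, not tied to the poster's classes) that copies every instance field from the committed clone back onto the globally visible object, so that existing references - dictionary keys included - keep seeing the new data:

```csharp
using System.Reflection;

public static class CommitHelper
{
    // Copies the instance fields of T (public and private, including the backing
    // fields of auto-properties) from 'source' onto 'destination' in place.
    // Note: private fields declared on base classes are not returned by GetFields
    // and would need a walk up the type hierarchy.
    public static void OverwriteFields<T>(T source, T destination) where T : class
    {
        var fields = typeof(T).GetFields(BindingFlags.Instance |
                                         BindingFlags.Public |
                                         BindingFlags.NonPublic);
        foreach (FieldInfo field in fields)
            field.SetValue(destination, field.GetValue(source));
    }
}
```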

    Read the article

  • Java/SQL data migration - comparing ResultSets in a failing JUnit test

    - by user1865053
    I CANNOT get this JUnit Test to pass for the life of me. Can somebody point out where this has gone wrong. I am doing a data migration(MSSQL SERVER 2005), but I have the sourceDBUrl and the targetDCUrl the same URL so to narrow it down to syntax errors. So that is what I have, a syntax error. I am comparing the results of a table for the query SELECT programmeapproval, resourceapproval FROM tr_timesheet WHERE timesheetid = ? and the test always fails, but passes for other junit tests I have developed. I created 3 diffemt resultSetsEqual methods and none work. Yet, some other JUnit tests I have developed have PASSED. THE QUERY: SELECT timesheetid, programmeapproval, resourceapproval FROM tr_timesheet Returns three columns timesheetid (PK,int, not null) (populated with a range of numbers 2240 - 2282) programmeapproval (smallint,not null) (populated with the number 1 in every field) resourceapproval (smallint, not null) (populated with a number 1 in every field) When I run the query that is embedded in the code it only returns one row with the programmeapproval and resourceapproval columns and both field populated with the number 1. I have all jdbc drivers correctly installed and tested for connectivity. The JUnit Test is failing at this point according to the IDE. assertTrue(helper.resultSetsEqual2(sourceVal,targetVal)); This is the code: /*THIS IS A JUNIT CLASS****? package a7.unittests.dao; import static org.junit.Assert.assertTrue; import java.sql.Connection; import java.sql.PreparedStatement; import java.sql.ResultSet; import java.sql.Types; import org.junit.Test; import artemispm.tritonalerts.TimesheetAlert; public class UnitTestTimesheetAlert { @Test public void testQUERY_CHECKALERT() throws Exception{ UnitTestHelper helper = new UnitTestHelper(); Connection con = helper.getConnection(helper.sourceDBUrl); Connection conTarget = helper.getConnection(helper.targetDBUrl); PreparedStatement stmt = con.prepareStatement("select programmeapproval, resourceapproval from tr_timesheet where timesheetid = ?"); stmt.setInt(1, 2240); ResultSet sourceVal = stmt.executeQuery(); stmt = conTarget.prepareStatement("select programmeapproval, resourceapproval from tr_timesheet where timesheetid = ?"); stmt.setInt(1,2240); ResultSet targetVal = stmt.executeQuery(); assertTrue(helper.resultSetsEqual2(sourceVal,targetVal)); }} /*END**/ /*THIS IS A REGULAR CLASS**/ package a7.unittests.dao; import java.sql.Connection; import java.sql.DriverManager; import java.sql.ResultSet; import java.sql.ResultSetMetaData; import java.sql.SQLException; public class UnitTestHelper { static String sourceDBUrl = "jdbc:sqlserver://127.0.0.1:1433;databaseName=a7itm;user=a7user;password=a7user"; static String targetDBUrl = "jdbc:sqlserver://127.0.0.1:1433;databaseName=a7itm;user=a7user;password=a7user"; public Connection getConnection(String url)throws Exception{ return DriverManager.getConnection(url); } public boolean resultSetsEqual3 (ResultSet rs1, ResultSet rs2) throws SQLException { int col = 1; //ResultSetMetaData metadata = rs1.getMetaData(); //int count = metadata.getColumnCount(); while (rs1.next() && rs2.next()) { final Object res1 = rs1.getObject(col); final Object res2 = rs2.getObject(col); // Check values if (!res1.equals(res2)) { throw new RuntimeException(String.format("%s and %s aren't equal at common position %d", res1, res2, col)); } // rs1 and rs2 must reach last row in the same iteration if ((rs1.isLast() != rs2.isLast())) { throw new RuntimeException("The two ResultSets contains different number of 
columns!"); } } return true; } public boolean resultSetsEqual (ResultSet source, ResultSet target) throws SQLException{ while(source.next()) { target.next(); ResultSetMetaData metadata = source.getMetaData(); int count = metadata.getColumnCount(); for (int i =1; i<=count; i++) { if(source.getObject(i) != target.getObject(i)) { return false; } } } return true; } public boolean resultSetsEqual2 (ResultSet source, ResultSet target) throws SQLException{ while(source.next()) { target.next(); ResultSetMetaData metadata = source.getMetaData(); int count = metadata.getColumnCount(); for (int i =1; i<=count; i++) { if(source.getObject(i).equals(target.getObject(i))) { return false; } } } return true; } } /END***/ /*PASTED NEW CLASS - THIS IS A JUNIT TEST CLASS*/ package a7.unittests.dao; import static org.junit.Assert.*; import java.sql.Connection; import java.sql.DriverManager; import org.junit.Test; public class TestDatabaseConnection { @Test public void testConnection() throws Exception{ UnitTestHelper helper = new UnitTestHelper(); Connection con = helper.getConnection(helper.sourceDBUrl); Connection conTarget = helper.getConnection(helper.targetDBUrl); assertTrue(con != null && conTarget != null); } } /**END***/

    Read the article

  • ERROR: Linux route add command failed: external program exited with error status: 4

    - by JohnMerlino
    A remote machine running fedora uses openvpn, and multiple developers were successfully able to connect to it via their client openvpn. However, I am running Ubuntu 12.04 and I am having trouble connecting to the server via vpn. I copied ca.crt, home.key, and home.crt from the server to my local machine to /etc/openvpn folder. My client.conf file looks like this: ############################################## # Sample client-side OpenVPN 2.0 config file # # for connecting to multi-client server. # # # # This configuration can be used by multiple # # clients, however each client should have # # its own cert and key files. # # # # On Windows, you might want to rename this # # file so it has a .ovpn extension # ############################################## # Specify that we are a client and that we # will be pulling certain config file directives # from the server. client # Use the same setting as you are using on # the server. # On most systems, the VPN will not function # unless you partially or fully disable # the firewall for the TUN/TAP interface. ;dev tap dev tun # Windows needs the TAP-Win32 adapter name # from the Network Connections panel # if you have more than one. On XP SP2, # you may need to disable the firewall # for the TAP adapter. ;dev-node MyTap # Are we connecting to a TCP or # UDP server? Use the same setting as # on the server. ;proto tcp proto udp # The hostname/IP and port of the server. # You can have multiple remote entries # to load balance between the servers. remote xx.xxx.xx.130 1194 ;remote my-server-2 1194 # Choose a random host from the remote # list for load-balancing. Otherwise # try hosts in the order specified. ;remote-random # Keep trying indefinitely to resolve the # host name of the OpenVPN server. Very useful # on machines which are not permanently connected # to the internet such as laptops. resolv-retry infinite # Most clients don't need to bind to # a specific local port number. nobind # Downgrade privileges after initialization (non-Windows only) ;user nobody ;group nogroup # Try to preserve some state across restarts. persist-key persist-tun # If you are connecting through an # HTTP proxy to reach the actual OpenVPN # server, put the proxy server/IP and # port number here. See the man page # if your proxy server requires # authentication. ;http-proxy-retry # retry on connection failures ;http-proxy [proxy server] [proxy port #] # Wireless networks often produce a lot # of duplicate packets. Set this flag # to silence duplicate packet warnings. ;mute-replay-warnings # SSL/TLS parms. # See the server config file for more # description. It's best to use # a separate .crt/.key file pair # for each client. A single ca # file can be used for all clients. ca ca.crt cert home.crt key home.key # Verify server certificate by checking # that the certicate has the nsCertType # field set to "server". This is an # important precaution to protect against # a potential attack discussed here: # http://openvpn.net/howto.html#mitm # # To use this feature, you will need to generate # your server certificates with the nsCertType # field set to "server". The build-key-server # script in the easy-rsa folder will do this. ns-cert-type server # If a tls-auth key is used on the server # then every client must also have the key. ;tls-auth ta.key 1 # Select a cryptographic cipher. # If the cipher option is used on the server # then you must also specify it here. ;cipher x # Enable compression on the VPN link. 
# Don't enable this unless it is also # enabled in the server config file. comp-lzo # Set log file verbosity. verb 3 # Silence repeating messages ;mute 20 But when I start server and look in /var/log/syslog, I notice the following error: May 27 22:13:51 myuser ovpn-client[5626]: /sbin/route add -net 10.27.12.1 netmask 255.255.255.252 gw 10.27.12.37 May 27 22:13:51 myuser ovpn-client[5626]: ERROR: Linux route add command failed: external program exited with error status: 4 May 27 22:13:51 myuser ovpn-client[5626]: /sbin/route add -net 172.27.12.0 netmask 255.255.255.0 gw 10.27.12.37 May 27 22:13:51 myuser ovpn-client[5626]: /sbin/route add -net 10.27.12.1 netmask 255.255.255.255 gw 10.27.12.37 And I am unable to connect to the server via openvpn: $ ssh [email protected] ssh: connect to host xxx.xx.xx.130 port 22: No route to host What may I be doing wrong?

    Read the article

  • How to know whether to create a general system or to hack a solution

    - by Andy K
    I'm new to coding; I've been learning it since last year. One of my worst habits is this: I often try to create a solution that is too big and too complex, and that doesn't achieve what needs to be achieved, when a hacky kludge would do the job. One recent example is the following (see the pastebin link below): http://pastebin.com/WzR3zsLn After explaining my issue, one nice person at Stack Overflow came up with this solution instead: http://stackoverflow.com/questions/25304170/update-a-field-by-removing-quarter-or-removing-month When should I keep my code simple, and when should I create a 'big', general solution? I sometimes feel stupid for building something so big and awkward just to solve a simple problem. It did not occur to me that there would be an easier solution. Any tips are welcome. Best

    Read the article

  • Creating site with wix.com or weebly.com [on hold]

    - by Edgar
    I decided to create a web page, and for that purpose I found out that I can use the wix.com portal. My knowledge of HTML and CSS is at a basic level. So I want to ask: what are the pros and cons of creating web pages with WIX compared with building them on your own (writing the code yourself)? One of my questions is: can I put custom advertisements on my page? I would also appreciate suggestions on which portal is better, wix.com or weebly.com, or whether I should just code a good website myself. Finally, any suggestions in this field would be welcome.

    Read the article

  • Should I create separate Work and Personal Github accounts?

    - by Almost Surely
    I'm fairly new to programming, and I've been working on many personal projects, which I'm concerned can come across as silly/unprofessional. The kind of projects I have are a Reddit Image Downloader and a tool for GMs to use in roleplaying games. I want to start building up a Github for projects in my chosen field of Data Analytics, but I'm not sure how to organize projects on my Github account. Should I create a "Professional" Github, mainly containing different analytical scripts, and have a separate "Personal" account for fun little projects of mine? Or am I just overthinking this, and should I just maintain one account?

    Read the article

  • How to increase screen resolution in Hyper-V

    - by Saad
    I am using Ubuntu 12.04 on Hyper-V in Windows 8. I want to increase the resolution so that the Ubuntu window can occupy my whole screen. Does anyone have any idea how it can be done? I found one solution, http://nramkumar.org/tech/blog/2013/05/04/ubuntu-under-hyper-v-how-to-overcome-screen-resolution-issue/, and I have installed the OpenSSH server on Ubuntu and Xming and PuTTY on Windows. I am not sure what hostname to use to connect to Ubuntu (running in Hyper-V); using [email protected] or username@localhost in the hostname field returns the error "Network error: connection refused". Can someone help me figure out what I am doing wrong?

    Read the article

  • Adopting DBVCS

    - by Wes McClure
    Identify early adopters Pick a small project with a small(ish) team.  This can be a legacy application or a green-field application. Strive to find a team of early adopters that will be eager to try something new. Get the team on board! Research Research the tool(s) that you want to use.  Some tools provide all of the features you would need while some only provide a slice of the pie.  DBVCS requires the ability to manage a set of change scripts that update a database from one version to the next.  Ideally a tool can track database versions and automatically apply updates.  The change script generation process can be manual, but having diff tools available to automatically generate it can really reduce the overhead to adoption.  Finally, an automated tool to generate a script file per database object is an added bonus as your version control system can quickly identify what was changed in a commit (add/del/modify), just like with code changes. Don’t settle on just one tool, identify several.  Then work with the team to evaluate the tools.  Have the team do some tests of the following scenarios with each tool: Baseline an existing database: can the migration tool work with legacy databases?  Caution: most migration platforms do not support baselines or have poor support, especially the fad of fluent APIs. Add/drop tables Add/drop procedures/functions/views Alter tables (rename columns, add columns, remove columns) Massage data – migrations sometimes involve changing data types that cannot be implicitly casted and require you to decide how the data is explicitly cast to the new type.  This is a requirement for a migrations platform.  Think about a case where you might want to combine fields, or move a field from one table to another, you wouldn’t want to lose the data. Run the tool via the command line.  If you cannot automate the tool in Continuous Integration what is the point? Create a copy of a database on demand. Backup/restore databases locally. Let the team give feedback and decide together, what tool they would like to try out. My recommendation at this point would be to include TSqlMigrations and RoundHouse as SQL based migration platforms.  In general I would recommend staying away from the fluent platforms as they often lack baseline capabilities and add overhead to learn a new API when SQL is already a very well known DSL.  Code migrations often get messy with procedures/views/functions as these have to be created with SQL and aren’t cross platform anyways.  IMO stick to SQL based migrations. Reconciling Production If your project is a legacy application, you will need to reconcile the current state of production with your development databases.  Find changes in production and bring them down to development, even if they are old and need to be removed.  Once complete, produce a baseline of either dev or prod as they are now in sync.  Commit this to your VCS of choice. Add whatever schema changes tracking mechanism your tool requires to your development database.  This often requires adding a table to track the schema version of that database.  Your tool should support doing this for you.  You can add this table to production when you do your next release. Script out any changes currently in dev.  Remove production artifacts that you brought down during reconciliation.  Add change scripts for any outstanding changes in dev since the last production release.  Commit these to your repository.   
Say No to Shared Dev DBs Simply put, you wouldn’t dream of sharing a code checkout, why would you share a development database?  If you have a shared dev database, back it up, distribute the backups and take the shared version offline (including the dev db server once all projects are using DB VCS).  Doing DB VCS with a shared database is bound to cause problems as people won’t be able to easily script out their own changes from those that others are working on.   First prod release Copy prod to your beta/testing environment.  Add the schema changes table (or mechanism) and do a test run of your changes.  If successful you can schedule this to be run on production.   Evaluation After your first release, evaluate the pain points of the process.  Try to find tools or modifications to existing tools to help fix them.  Don’t leave stones unturned, iteratively evolve your tools and practices to make the process as seamless as possible.  This is why I suggest open source alternatives.  Nothing is set in stone, a good example was adding transactional support to TSqlMigrations.  We ran into situations where an update would break a database, so I added a feature to do transactional updates and rollback on errors!  Another good example is generating change scripts.  We have been manually making these for months now.  I found an open source project called Open DB Diff and integrated this with TSqlMigrations.  These were things we just accepted at the time when we began adopting our tool set.  Once we became comfortable with the base functionality, it was time to start automating more of the process.  Just like anything else with development, never be afraid to try to find tools to make your job easier!   Enjoy -Wes

    Read the article

  • How to build a game like HaxBall?

    - by Cengiz Frostclaw
    I'm a coder interested in game development, and I want to build a very simple P2P (real-time) game. The perfect example of this is HaxBall. The client/server model doesn't matter; it could be like HaxBall (i.e. the room creator is the host), or a changing host, or the server as host (if there is such a thing). I just want to learn the basics of creating a field where players can move their characters in real time. So where should I start? Please guide me in the right direction, with examples if possible. I found out that RTMFP does what I seek, but how do I use it? I know ActionScript 3.0, but I have no idea how to set up a multiplayer game. Please help! Thanks in advance.

    Read the article

  • Getting selected row in inputListOfValues returnPopupListener

    - by Frank Nimphius
    Model driven list-of-values in Oracle ADF are configured on the ADF Business component attribute which should be updated with the user value selection. The value lookup can be configured to be displayed as a select list, combo box, input list of values or combo box with list of values. Displaying the list in an af:inputListOfValues component shows the attribute value in an input text field and with an icon attached to it for the user to launch the list-of-values dialog. The list-of-values dialog allows users to use a search form to filter the lookup data list and to select an entry, which return value then is added as the value of the af:inputListOfValues component. Note: The model driven LOV can be configured in ADF Business Components to update multiple attributes with the user selection, though the most common use case is to update the value of a single attribute. A question on OTN was how to access the row of the selected return value on the ADF Faces front end. For this, you need to know that there is a Model property defined on the af:inputListOfValues that references the ListOfValuesModel implementation in the model. It is the value of this Model property that you need to get access to. The af:inputListOfValues has a ReturnPopupListener property that you can use to configure a managed bean method to receive notification when the user closes the LOV popup dialog by selecting the Ok button. This listener is not triggered when the cancel button is pressed. The managed bean signature can be created declaratively in Oracle JDeveloper 11g using the Edit option in the context menu next to the ReturnPopupListener field in the PropertyInspector. The empty method signature looks as shown below public void returnListener(ReturnPopupEvent returnPopupEvent) { } The ReturnPopupEvent object gives you access the RichInputListOfValues component instance, which represents the af:inputListOfValues component at runtime. From here you access the Model property of the component to then get a handle to the CollectionModel. The CollectionModel returns an instance of JUCtrlHierBinding in its getWrappedData method. Though there is no tree binding definition for the list of values dialog defined in the PageDef, it exists. Once you have access to this, you can read the row the user selected in the list of values dialog. 
    See the following code:

    public void returnListener(ReturnPopupEvent returnPopupEvent) {
        //access the UI component instance from the return event
        RichInputListOfValues lovField =
            (RichInputListOfValues) returnPopupEvent.getSource();

        //the LOV model gives us access to the collection model and the
        //ADF tree binding used to populate the lookup table
        ListOfValuesModel lovModel = lovField.getModel();
        CollectionModel collectionModel =
            lovModel.getTableModel().getCollectionModel();

        //the collection model wraps an instance of the ADF
        //FacesCtrlHierBinding, which is cast to JUCtrlHierBinding
        JUCtrlHierBinding treeBinding =
            (JUCtrlHierBinding) collectionModel.getWrappedData();

        //the selected rows are defined in a RowKeySet; as the LOV table only
        //supports single selection, there is only one entry in the set
        RowKeySet rks = (RowKeySet) returnPopupEvent.getReturnValue();

        //the ADF Faces table row key is a list containing the oracle.jbo.Key
        List tableRowKey = (List) rks.iterator().next();

        //get the iterator binding for the LOV lookup table binding
        DCIteratorBinding dciter = treeBinding.getDCIteratorBinding();

        //get the selected row by its JBO key
        Key key = (Key) tableRowKey.get(0);
        Row rw = dciter.findRowByKeyString(key.toStringFormat(true));

        //work with the row
        // ...
    }

    Read the article

  • SEO question, getting info about websites

    - by michael
    I don't know much about SEO. I have a CSV file with 200,000 links to websites, and I want to add another field (or maybe a couple of them) to each link in the CSV file with the page ranking of each link and maybe other interesting metrics and info about the link. I saw a free API from http://apiwiki.seomoz.org/ that I could maybe use to build a simple script, but it's limited to 3 links per second, which would take roughly 1,100 minutes or 18 hours to run. Any other ideas on how to get this kind of simple metrics about each link? Thanks!
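
    Whatever metrics service ends up being used, the 3-requests-per-second limit mentioned above only means pacing the calls. Below is a rough, illustrative sketch of such a throttled loop in C#; the file names, the metrics endpoint and the response handling are placeholders for illustration, not a working SEOmoz client.

    using System;
    using System.IO;
    using System.Net;
    using System.Threading;

    class LinkMetricsFetcher
    {
        static void Main()
        {
            // One request roughly every 350 ms keeps the loop under ~3 requests per second
            TimeSpan delay = TimeSpan.FromMilliseconds(350);

            using (StreamReader reader = new StreamReader("links.csv"))              // placeholder input file
            using (StreamWriter writer = new StreamWriter("links-with-metrics.csv")) // placeholder output file
            {
                string url;
                while ((url = reader.ReadLine()) != null)
                {
                    // Placeholder endpoint -- substitute the real metrics API and its auth parameters
                    string apiCall = "http://example-metrics-api/metrics?site=" + Uri.EscapeDataString(url);

                    using (WebClient client = new WebClient())
                    {
                        string response = client.DownloadString(apiCall);
                        // Response parsing is left out; the raw response is appended only for illustration
                        writer.WriteLine(url + "," + response.Replace(',', ';'));
                    }

                    Thread.Sleep(delay); // stay inside the rate limit
                }
            }
        }
    }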

    Read the article

  • How to deploy Document Set using CAML in SharePoint2010 solution package

    - by ybbest
    In my last post, I showed you how to use a Document Set through the SharePoint UI in the browser. In this post, I'd like to show you how to create the same Document Set using CAML and a SharePoint solution package. You can download the complete solution here.
    1. Create the Application Number site column using the SharePoint empty element item template in VS2010:
    <?xml version="1.0" encoding="utf-8"?>
    <Elements xmlns="http://schemas.microsoft.com/sharepoint/">
      <Field Type="Text" DisplayName="ApplicationNumber" Required="FALSE" EnforceUniqueValues="FALSE"
             Indexed="FALSE" MaxLength="255" Group="YBBEST" ID="{916bf3af-5ec1-4441-acd8-88ff62ab1b7e}"
             Name="ApplicationNumber"></Field>
    </Elements>
    2. Create the Loan Application Form and Loan Contract Form content types:
    <?xml version="1.0" encoding="utf-8"?>
    <Elements xmlns="http://schemas.microsoft.com/sharepoint/">
      <!-- Parent ContentType: Document (0x0101) -->
      <ContentType ID="0x0101005dfbf820ce3c49f69c73a00e0e0e53f6" Name="Loan Contract Form" Group="YBBEST"
                   Description="Loan Contract Form" Inherits="TRUE" Version="0">
        <FieldRefs>
          <FieldRef ID="916bf3af-5ec1-4441-acd8-88ff62ab1b7e" Name="ApplicationNumber" DisplayName="ApplicationNumber" />
        </FieldRefs>
      </ContentType>
      <!-- Parent ContentType: Document (0x0101) -->
      <ContentType ID="0x010100f3016e3d03454b93bc4d6ab63941c0d2" Name="Loan Application Form" Group="YBBEST"
                   Description="Loan Application Form" Inherits="TRUE" Version="0">
        <FieldRefs>
          <FieldRef ID="916bf3af-5ec1-4441-acd8-88ff62ab1b7e" Name="ApplicationNumber" DisplayName="ApplicationNumber" />
        </FieldRefs>
      </ContentType>
    </Elements>
    3. Create the Loan Application Document Set.
    4. Create the Document Set Welcome Page using the SharePoint Module item template.
    Notes:
    1. When creating a Document Set content type, you need to set Inherits="FALSE" or remove Inherits="TRUE" from the content type definition (the default is Inherits="FALSE"). This is a Document Set limitation in the current version of SharePoint 2010. Because of this, you also need to manually attach the event receiver and the Document Set welcome page to your custom Document Set content type.
    2. Shared Fields are push-down only.
    3. Document Sets are not available in SharePoint Foundation (only SharePoint Server 2010).
    4. You can't have folders within Document Sets (you can place Document Sets in folders, though).
    For a complete list of limitations and considerations, see the references for details.
    References: Document Set Limitations and Considerations in SharePoint 2010 1; Document Set Limitations and Considerations in SharePoint 2010 2; Document Sets planning (SharePoint Server 2010); Import Document Sets Issue; http://msdn.microsoft.com/en-us/library/gg581064.aspx; http://channel9.msdn.com/Events/TechEd/NorthAmerica/2010/OSP305; DocumentSet Class
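
    Note 1 above says the event receiver and the welcome page have to be attached to the custom Document Set content type manually. One common way to do that from a solution package is a feature receiver; the following is an illustrative, untested sketch against the SharePoint 2010 server object model. The content type and field names ("Loan Application", "Loan Application Form", "Loan Contract Form", "ApplicationNumber") are taken from the CAML above; the feature scope and the exact template calls are assumptions, not the author's packaged code.

    using System;
    using Microsoft.SharePoint;
    using Microsoft.Office.DocumentManagement.DocumentSets;

    public class LoanApplicationDocSetReceiver : SPFeatureReceiver
    {
        public override void FeatureActivated(SPFeatureReceiverProperties properties)
        {
            // Assumes a web-scoped feature; adjust if the content types are deployed at site scope
            SPWeb web = properties.Feature.Parent as SPWeb;
            if (web == null) return;

            // Assumed name of the custom Document Set content type created in step 3
            SPContentType docSetCt = web.ContentTypes["Loan Application"];
            DocumentSetTemplate template = DocumentSetTemplate.GetDocumentSetTemplate(docSetCt);

            // Allow the two form content types inside the Document Set
            template.AllowedContentTypes.Add(web.ContentTypes["Loan Application Form"].Id);
            template.AllowedContentTypes.Add(web.ContentTypes["Loan Contract Form"].Id);

            // Share the ApplicationNumber column so its value is pushed down to documents in the set,
            // and show it on the welcome page
            SPField appNumber = web.Fields["ApplicationNumber"];
            template.SharedFields.Add(appNumber);
            template.WelcomePageFields.Add(appNumber);

            template.Update(true);
            docSetCt.Update(true);
        }
    }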

    Read the article

  • Video: CodeRush Plugin Makes XPO Fields Easier

    Check out this CodeRush plugin screencast about XPO_EasyFields. XPO_EasyFields is a new CodeRush plugin created by Michael Proctor. The XPO_EasyFields plugin allows the use of the XPO Simplified Criteria Syntax, which creates and updates the PersistentBase.FieldsClass of your object for easy field reference. Download XPO_EasyFields for free, then check out Michael's blog to learn more: XPO_EasyFields, makes your life easy with XPO; XPO from the beginning, Part 1, Basic Information.

    Read the article

  • Integrate BING API for Search inside ASP.Net web application

    - by sreejukg
    As you might already know, Bing is the Microsoft search engine and is getting more popular day by day. Bing offers APIs that can be integrated into your website to increase your website's functionality. At the moment there are two important APIs available: the Bing Search API and Bing Maps. The Search API enables you to build applications that utilize Bing's technology. It allows you to search multiple source types such as web, images and video, and supports various output protocols such as JSON, XML, and SOAP. You will also be able to customize the search results as you wish for your public-facing website. The Bing Maps API allows you to build robust applications that use Bing Maps. In this article I am going to describe how you can integrate Bing search into your website.
    In order to start using Bing, you first need to sign in to http://www.bing.com/toolbox/bingdeveloper/ using your Windows Live credentials. Click on the Sign in button and you will be asked to enter your Windows Live credentials. Once signed in, you will be redirected to the Developer page. Here you can create applications and get an AppID for each application. Since I am a first-time user, I don't have any applications added. Click on the Add button to add a new application. You will be asked to enter certain details about your application. The fields are straightforward; the only thing you need to note is the website field, where you enter the address of the website from which you are going to use this application (this field is optional). Of course you need to agree to the terms and conditions and then click Save. Once you click Save, the application will be created and the application ID will be available for your use.
    Now that we have the AppID, note that Bing supports three protocols: JSON, XML and SOAP. JSON is useful if you want to call the search requests directly from the browser and use JavaScript to parse the results, so JSON is the favorite choice for AJAX applications. XML is the alternative for applications that do not support SOAP, e.g. Flash/Silverlight etc. SOAP is ideal for strongly typed languages and gives a request/response object model. In this article I am going to demonstrate how to search the Bing API using the SOAP protocol from an ASP.Net application.
    For the purpose of this demonstration, I am going to create an ASP.Net project and implement the search functionality in an aspx page. Open Visual Studio, navigate to File -> New Project, and select ASP.Net Empty Web Application; I named the project "BingSearchSample". Add a Search.aspx page to the project; once added, the solution explorer will look similar to the following. Now you need to add a web reference to the SOAP service available from Bing. To do this, right-click your project in the solution explorer and select Add Service Reference. The Add Service Reference dialog will appear. In the bottom left of the dialog you can find the Advanced button; click on it. Now the service reference settings dialog will appear. In the bottom left, you can find the Add Web Reference button; click on it. The Add Web Reference dialog will appear. Enter the URL as http://api.bing.net/search.wsdl?AppID=<YourAppIDHere>&version=2.2 (replace <YourAppIDHere> with the AppID you generated previously) and click on the button next to it. This will find the web service methods available. You can change the namespace suggested by Bing, but for the purpose of this demonstration I have accepted all the default settings.
Click on the Add Reference button once you are done. The web reference to the Search service will now be added to your project; you can find it in the solution explorer. Now, in the Search.aspx page you previously created, place a textbox, a button and a grid view. For the purpose of this demonstration, I have given them the identifiers (IDs) txtSearch, btnSearch and gvSearch respectively. The idea is to search for the text entered in the text box using the Bing service and show the results in the grid view. In the design view, Search.aspx looks as follows. In the Search.aspx.cs page, add a using statement that points to net.bing.api. I have added the following code for the button click event handler. The code is very straightforward: it just calls the service with your AppID, a query to search for and a source to search. Let us run this page and see the output when I enter Microsoft in the textbox. If you want to search a specific site, you can include the site name in the query parameter. For example, the following query will search for the word Microsoft on the www.microsoft.com website: searchRequest.Query = "site:www.microsoft.com Microsoft"; The output of this query is as follows. Integrating the Bing Search API into your website is easy and there is no limit on the customization of the interface you can do. There is no Bing branding required, so I believe this is a great option for web developers when they plan for site search.
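
    The button click handler itself appears in the original post only as a screenshot, so here is a rough reconstruction of what it could look like against the generated web reference. The control IDs (txtSearch, btnSearch, gvSearch) and the net.bing.api namespace come from the article; the proxy type names (BingService, SearchRequest, SourceType, SearchResponse) follow common Bing API 2.x samples and may differ from what Visual Studio generates in your project.

    using System;
    using BingSearchSample.net.bing.api;   // web reference namespace (assumed full name)

    public partial class Search : System.Web.UI.Page
    {
        protected void btnSearch_Click(object sender, EventArgs e)
        {
            // Build the SOAP request: AppID, the query to search for, and the source type
            SearchRequest searchRequest = new SearchRequest();
            searchRequest.AppId = "<YourAppIDHere>";       // the AppID created on the Bing developer page
            searchRequest.Query = txtSearch.Text;          // e.g. "site:www.microsoft.com Microsoft"
            searchRequest.Sources = new SourceType[] { SourceType.Web };

            // Call the Bing SOAP service and bind the web results to the grid view
            BingService service = new BingService();
            SearchResponse response = service.Search(searchRequest);

            gvSearch.DataSource = response.Web.Results;    // each result carries Title, Description, Url
            gvSearch.DataBind();
        }
    }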

    Read the article

  • Oracle Enterprise Data Quality: Ever Integration-ready

    - by Mala Narasimharajan
    It is closing in on a year now since Oracle's acquisition of Datanomic, and the addition of Oracle Enterprise Data Quality (EDQ) to the Oracle software family. The big move has caused some big shifts in emphasis and some very encouraging excitement from the field. To give an illustration, combined with a shameless promotion of how EDQ can help to give quick insights into your data, I did a quick Phrase Profile of the subject field of emails to the Global EDQ mailing list since it was set up last September. The results revealed a very clear theme: Integration, Integration, Integration! As well as the important Siebel and Oracle Data Integrator (ODI) integrations, we have been asked about integration with a huge variety of Oracle applications, including EBS, Peoplesoft, CRM on Demand, Fusion, DRM, Endeca, RightNow, and more - and we have not stood still! While it would not have been possible to develop specific pre-integrations with all of the above within a year, we have developed a package of feature-rich out-of-the-box web services and batch processes that can be plugged into any application or middleware technology with ease. And with Siebel, they work out of the box. Oracle Enterprise Data Quality version 9.0.4 includes the Customer Data Services (CDS) pack – a ready set of standard processes with standard interfaces, providing integrated address verification and cleansing, individual matching, and organization matching. The services are suitable for either batch or real-time processing, and are enabled for international data, with simple configuration options driving the set of locale-specific dictionaries that are used. For example, large dictionaries are provided to support international name transcription and variant matching, including highly specialized handling for Arabic, Japanese, Chinese and Korean data. In total across all locales, CDS includes well over a million dictionary entries. (Excerpt from EDQ's CDS Individual Name Standardization Dictionary.) CDS has been developed to replace the OEM of Informatica Identity Resolution (IIR) for attached Data Quality on the Oracle price list, but does this in a way that creates a 'best of both worlds' situation for customers, who can harness not only the out-of-the-box functionality of pre-packaged matching and standardization services, but also the flexibility of OEDQ if they want to customize the interfaces or the process logic, without having to learn more than one product. From a competitive point of view, we believe this stands us in good stead against our key competitors, including Informatica, who have separate 'Identity Resolution' and general DQ products, and IBM, who provide limited out-of-the-box capabilities (with a steep learning curve) in both their QualityStage data quality and Initiate matching products. Here is a brief guide to the main services provided in the pack: Address Verification and Standardization (EDQ's CDS Address Cleaning Process). The Address Verification and Standardization service uses EDQ Address Verification (an OEM of Loqate software) to verify and clean addresses in either real-time or batch.
The Address Verification processor is wrapped in an EDQ process – this adds significant capabilities over calling the underlying Address Verification API directly, specifically: country-specific thresholds to determine when to accept the verification result (and therefore to change the input address) based on the confidence level of the API; optimization of address verification by pre-standardizing data where required; formatting of output addresses into the input address fields normally used by applications; and adding descriptions of the address verification and geocoding return codes. The process can then be used to provide real-time and batch address cleansing in any application, such as a simple web page calling address cleaning and geocoding as part of a check on individual data.
    Duplicate Prevention: Unlike Informatica Identity Resolution (IIR), EDQ uses stateless services for duplicate prevention to avoid issues caused by complex replication and synchronization of large-volume customer data. When a record is added or updated in an application, the EDQ Cluster Key Generation service is called, and returns a number of key values. These are used to select other records ('candidates') that may match in the application data (which has been pre-seeded with keys using the same service). The 'driving record' (the new or updated record) is then presented along with all selected candidates to the EDQ Matching Service, which decides which of the candidates are a good match with the driving record, and scores them according to the strength of match. In this model, complex multi-locale EDQ techniques can be used to generate the keys and ensure that the right balance between performance and matching effectiveness is maintained, while ensuring that the application retains control of data integrity and transactional commits. The process is explained below (EDQ Duplicate Prevention Architecture). Note that where the integration is with a hub, there may be an additional call to the Cluster Key Generation service if the master record has changed due to merges with other records (and therefore needs to have new key values generated before commit).
    Batch Matching: In order to allow customers to use different match rules in batch than in real time, separate matching templates are provided for batch matching. For example, some customers want to minimize intervention in key user flows (such as adding new customers) in front-end applications, but to conduct a more exhaustive match on a regular basis in the back office. The batch matching jobs are also used when migrating data between systems, and in this case normally a more precise (and automated) type of matching is required, in order to minimize the review work performed by Data Stewards. In batch matching, data is captured into EDQ using its standard interfaces, and records are standardized, clustered and matched in an EDQ job before matches are written out. As with all EDQ jobs, batch matching may be called from Oracle Data Integrator (ODI) if required. When working with Siebel CRM (or master data in Siebel UCM), Siebel's Data Quality Manager is used to instigate batch jobs, and a shared staging database is used to write records for matching and to consume match results. The CDS batch matching processes automatically adjust to Siebel's 'Full Match' (match all records against each other) and 'Incremental Match' (match a subset of records against all of their selected candidates) modes.
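
    To make the stateless call sequence above concrete, here is a purely illustrative sketch of the flow an application might implement around the two services. None of the type or method names below are EDQ APIs; IClusterKeyService, ICandidateStore and IMatchingService are hypothetical stand-ins for the real-time key generation service, the application's own candidate lookup, and the matching service.

    using System.Collections.Generic;

    // Hypothetical shapes -- not EDQ product types
    public class CustomerRecord { /* name, address, identifiers ... */ }
    public class ScoredMatch { public CustomerRecord Candidate; public double Score; }

    public interface IClusterKeyService { IList<string> GenerateKeys(CustomerRecord record); }
    public interface ICandidateStore { IList<CustomerRecord> FindByClusterKeys(IList<string> keys); }
    public interface IMatchingService { IList<ScoredMatch> Match(CustomerRecord driving, IList<CustomerRecord> candidates); }

    public class DuplicatePreventionFlow
    {
        private readonly IClusterKeyService keyService;
        private readonly ICandidateStore store;
        private readonly IMatchingService matcher;

        public DuplicatePreventionFlow(IClusterKeyService keyService, ICandidateStore store, IMatchingService matcher)
        {
            this.keyService = keyService;
            this.store = store;
            this.matcher = matcher;
        }

        public IList<ScoredMatch> CheckForDuplicates(CustomerRecord driving)
        {
            // 1. Generate cluster key values for the new/updated ("driving") record
            IList<string> keys = keyService.GenerateKeys(driving);

            // 2. Select candidate records from the application data, which was pre-seeded with the same keys
            IList<CustomerRecord> candidates = store.FindByClusterKeys(keys);

            // 3. Score the candidates against the driving record; the application then decides,
            //    based on the scores, whether to block the save, warn the user, or merge records
            return matcher.Match(driving, candidates);
        }
    }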
The Future The Customer Data Services Pack is an important part of the Oracle strategy for EDQ, offering a clear path to making Data Quality Assurance an integral part of enterprise applications, and providing a strong value proposition for adopting EDQ. We are planning various additions and improvements, including: An out-of-the-box Data Quality Dashboard Even more comprehensive international data handling Address search (suggesting multiple results) Integrated address matching The EDQ Customer Data Services Pack is part of the Enterprise Data Quality Media Pack, available for download at http://www.oracle.com/technetwork/middleware/oedq/downloads/index.html.

    Read the article

  • On Contract Employment

    - by kerry
    I am going to post about something I don't post about a lot: the business side of development. Scott at the antipimp does a good job of explaining how contracts work from a business perspective. I am going to give a view from the ground. First, a little background on myself. I have recently taken a 6-month contract after about 8 years of fulltime employment. I have 2 kids and a stay-at-home wife. I took this contract opportunity because I wanted to try it on for size. I have always wondered whether I would like doing contracts over fulltime employment. So, in keeping with the theme of this blog, I will write this down now so that I may reference it later.
    ALL jobs are temporary! Right now you may not realize it, and most people simply ignore it, but EVERY job is temporary. Everyone should be planning for life after the money stops coming in. Sadly, most people do not. Contracting pushes this issue to the forefront, making you deal with it. After a month on a contract, I am happy to say that I am saving more than I ever saved in a fulltime position. Hopefully, I will be ready in case of an extended window of unemployment between contracts.
    Networking: I find it extremely gratifying getting to know people. It is especially beneficial when moving to a new city. What better way to go out and meet people in your field than to work a few contracts? Six months of working beside someone and you get to know them pretty well. This is one of my favorite aspects.
    Technical Agility: Moving between IS shops takes (or molds you into) a flexible person. You have to be able to go in and hit the ground running. This means you need to be able to sit down and start work on a large codebase, working in a language that you may or may not have that much experience in. It is also an excellent way to learn new languages and broaden your technical skill set. I took my current position to learn Ruby. A month ago, I had only used it in passing, but now I am using it every day. It's a tragedy in this field when people start coding for the joy and love of coding, then become so deeply entrenched in their company's methods and technologies that it becomes just a job.
    Less Stress: I am not talking about the kind of stress you get from a jackass boss. I am talking about the kind of stress I (or others) experience about planning and future-proofing your code. I'm not saying I stay up at night worrying whether we have done it right, or whether that code I wrote today is going to bite me later, but it still creeps around in the dark recesses of my mind. Careful though, I am not suggesting you write sloppy code; just defer any large architectural or design decisions to the 'code owners'.
    Flexible Scheduling: It makes me very happy to be able to cut out a few hours early on a Friday (provided the work is done) and start the weekend off early by going to the pool, or taking the kids to the park. Contracting provides you this opportunity (mileage may vary). Most of your fulltime brethren will not care; they will be jealous that their corporate policy prevents them from doing the same. However, you must be mindful of situations where this is not appropriate, and don't overdo it. You are there to work, after all.
    Affirmation of Need: Have you ever been stuck in a job where you thought you were underpaid? Have you ever been in a position where you felt like there was not enough workload for you? This is not a problem for contractors. When you start a contract, it is understood that you are needed, and the employer knows that you are happy with the terms.
    Contracting may not be for everyone. But if you develop a relationship with a good consulting firm and keep their clients happy, they will keep you happy. They want you to work almost as much as you do. Just be sure to plan financially for any windows of unemployment.

    Read the article

  • We're Hiring! - Server and Desktop Virtualization Product Management

    - by adam.hawley
    There is a lot of exciting stuff going on here at Oracle in general but the server and desktop virtualization group in particular is deeply involved in executing on Oracle's strategy for delivering complete hardware-software solutions across the company, so we're expanding our team with several open positions. If you're interested and qualified, then please send us your resume. The three positions in Virtualization Product Management can be found by going here or going to the Employment Opportunities Job Search page, clicking on 'Advanced Search' and typing the job opening numbers (include 'IRC'... see below) in the 'Keywords' field. Click Search. Current openings are... IRC1457623: Oracle VM Product Management IRC1457626: Desktop Virtualization Application Solutions Product Management IRC1473577: Oracle VM Best Practices Implementation Engineer (Product Management) I look forward to hearing from you!

    Read the article

< Previous Page | 195 196 197 198 199 200 201 202 203 204 205 206  | Next Page >