Search Results



  • Excel - Reuse a trend line to apply to other data

    - by milko
    I've obtained a trend line from a particular set of data. What I'd like to do now is to reuse this trend line to predict values from a given pair (x,y) of coordinates. To put it another way, I have one pair (x,y) that I know is correct for sure. I don't know any other point. Let's assume the behavior of this new set is similar to the one I've got the trend line from. Is there any way Excel could compute other points following this trend line?
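    A minimal sketch of the idea outside Excel (all values and names below are made up): keep the slope of the existing trend line, then anchor that line at the one point known to be correct.

        import numpy as np

        # The original series the trend line was fitted on.
        old_x = np.array([1, 2, 3, 4, 5])
        old_y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
        slope, intercept = np.polyfit(old_x, old_y, 1)   # linear trend

        # One known-correct point in the new series; reuse the slope from above.
        x0, y0 = 10, 25.0
        new_x = np.array([11, 12, 13])
        new_y = y0 + slope * (new_x - x0)   # same trend, shifted through (x0, y0)
        print(new_y)

    In Excel terms, SLOPE() over the original range plays the role of the fitted slope here.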


  • Mirrored servers in data centers nationwide -- how?

    - by Sysadmin Evstar
    I flunked my IT interview by getting this question wrong. I thought that in the various metropolitan areas, an "http://google.com" request goes to the ISP's DNS server, which somehow returns an IP address for one of several geographically nearby HTTP servers, and then something internally rolls over to the next available local Google server. But I could not explain where the table of available local Google servers is actually cached, or the details of the IP address rollover, or how they could manually take a server out of the rotation from anywhere. So, what should I be reading now so I can ace this question next time? Also, what daemons run on these machines 24/7 to keep all those mirrored database disks synchronized?
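    As one small, hedged illustration of the DNS piece: a single hostname can map to several A records, and the set a resolver hands back can vary with where you ask from. A quick Python check:

        # Print every address the local resolver returns for one name.
        # Which addresses appear (and their order) depends on the resolver,
        # which is one building block of the geo-distribution asked about.
        import socket

        records = socket.getaddrinfo("google.com", 80, proto=socket.IPPROTO_TCP)
        for family, _type, _proto, _canon, sockaddr in records:
            print(sockaddr[0])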


  • Archive format & tool for large amounts of data (50gb+)

    - by marcusstarnes
    I only realised this afternoon that the ZIP format has a limit of what appears to be around 20 GB. I am trying to automate an archive process (using Automate) to zip/rar/whatever a collection of folders/files on one of my disks. It always appeared to bomb out with an incomplete archive at about 20 GB. So I tried using WinRAR and doing it manually as a ZIP file, but it told me of the limit. So, I was wondering: what is a recommended archive format (and tool for accomplishing the task) for archiving a large amount of data (around 50 GB)?
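    For reference, plain ZIP archives need the ZIP64 extensions once the archive (or any member) grows past 4 GiB, which is one reason large archives bomb out. In Python's standard library ZIP64 is just a flag; file names below are illustrative:

        # Write a ZIP64-capable archive; allowZip64 lifts the 4 GiB limits.
        import zipfile

        with zipfile.ZipFile("backup.zip", "w",
                             compression=zipfile.ZIP_DEFLATED,
                             allowZip64=True) as zf:
            zf.write("some/large/folder/file.bin")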


  • minimum required bandwidth for remote database server

    - by user66734
    I want to build a small warehousing application for my company. We have a central warehouse which distributes to 8 sales points across the country. They insist on an in-house solution. I am thinking of setting up a central MySQL db Linux server and having the branches connect to it to store sales. Queries to the db from the branches will be minimal, maybe 10 per hour. However, I need all the branches to be able to store each sale's data (product ID, customer ID) in the central db at peak time at most once every five minutes. My question is: can I get away with simple 24 Mbps/768 Kbps DSL lines? If not, what is the bandwidth requirement? Can I rely on a load-balancing router to combine additional lines if needed? Can you propose some server hardware specs?
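    A hedged back-of-envelope check, with an assumed record size, suggests raw bandwidth is not the constraint here; line reliability and latency matter more:

        # Assumed: ~200 bytes per sale row including protocol overhead,
        # at most one sale per branch every five minutes, 8 branches.
        record_bytes = 200
        sales_per_branch_per_sec = 1 / (5 * 60)
        branches = 8

        bits_per_sec = record_bytes * 8 * sales_per_branch_per_sec * branches
        print(f"{bits_per_sec:.1f} bit/s at the central server")  # ~42.7 bit/s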


  • Analyse frequencies of date ranges in Google Drive

    - by wnstnsmth
    I have a Google Drive spreadsheet where I would like to compute occurrences of date ranges. As you can see in my sheet, there is a column date_utc+1 which contains almost random date data. https://docs.google.com/spreadsheet/ccc?key=0AhqMXeYxWMD_dGRkVGRqbkR3c05mWUdhYkJWcFo2Mmc

    What I would like to do is (1) put the date values into bins of 6 hours each, i.e. 12/5/2012 23:57:04 until 12/6/2012 0:03:17 would be in the first bin, 12/6/2012 11:20:53 until 12/6/2012 17:17:07 in the second bin, and so forth. Then, I would like to count the occurrences of those bins, such as:

        bin_from             bin_to               freq
        12/5/2012 23:57:04   12/6/2012 0:03:17    2
        12/6/2012 11:20:53   12/6/2012 17:17:07   19
        ...                  ...                  ...

    Hope it is clear what I mean. Partial hints are very welcome as well, since I am pretty new to spreadsheeting.
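    One common reading of "bins of 6 hours", sketched in pandas with the column name taken from the question (note these bins are aligned to the clock, not to the first sample):

        # Snap each timestamp down to a 6-hour boundary, then count per bin.
        import pandas as pd

        df = pd.DataFrame({"date_utc+1": pd.to_datetime([
            "2012-12-05 23:57:04", "2012-12-06 00:03:17", "2012-12-06 11:20:53",
        ])})
        bins = df["date_utc+1"].dt.floor("6h")
        print(bins.value_counts().sort_index())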


  • Excel 2010 Move data from multiple columns to single row

    - by frustrated529
    So frustrating! I get data sent to me and it looks like this:

        a 1
        a      2 2
        a             3 3
        b 1
        b      2 2
        b             3 3
        b                    4 4

    and I need it to look like this:

        a 1 2 2 3 3
        b 1 2 2 3 3 4 4

    I have about 30 columns whose values need to move up to the top row of their group, after which the duplicate rows can be removed. I have been searching forums for several days and trying bits and pieces of code. I am having such a tough time with VBA!
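    The consolidation logic itself, independent of VBA, fits in a few lines of pandas (column names here are invented): group on the key column and keep the first non-empty value per column in each group.

        import pandas as pd

        df = pd.DataFrame({
            "key":  ["a", "a", "a", "b", "b", "b", "b"],
            "col1": [1, None, None, 1, None, None, None],
            "col2": [None, 2, None, None, 2, None, None],
            "col3": [None, None, 3, None, None, 3, None],
            "col4": [None, None, None, None, None, None, 4],
        })
        # GroupBy.first() returns the first non-null value in each column.
        collapsed = df.groupby("key", as_index=False).first()
        print(collapsed)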


  • WUBI installation wiped hard-drive?

    - by gkaykck
    Here is what happened: I've installed Xubuntu via Wubi on my D: drive. I have two drives, by the way, C: and D:. Basically I use the C: drive for Windows and the D: drive for everything else and backups, as everybody does, and I put my Wubi installation on drive D: as well. Then I tried to do a slightly extreme thing, which was making a shortcut to the D: folder from within Xubuntu. The problem is that suddenly all my files disappeared. The folders stayed the same, but the files disappeared. The drive still holds the files (I know because it is still full), but I cannot see any of them. I tried checking for errors and some basic data recovery, which didn't work at all. Any help?


  • Removing extra commas in CSV without another data source

    - by fi-no
    We have a large database with customer addresses that was exported from a SQL database to CSV. In the event that a company has a comma in its name, it (predictably) throws the whole file out of whack. Unfortunately, there are so many instances of this (and of commas in the second address line) that the whole CSV (~100k rows) is a huge mess. The obvious fix is to export the data again in a different, non-comma-reliant format, but access to that SQL database is more or less impossible at the moment... I've tried a few tools and brainstormed about combining things to fix this, but I figured asking couldn't hurt. Thanks!
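    One hedged heuristic, workable only if the expected column count is known and the stray commas sit in a single known column (the indices below are invented): any row with surplus fields is re-joined at that column. Rows where both the name and the address line split will still need review by hand.

        import csv

        EXPECTED = 8     # assumed column count of the export
        NAME_COL = 1     # assumed index of the comma-prone company-name column

        with open("export.csv", newline="") as src, \
             open("fixed.csv", "w", newline="") as dst:
            reader, writer = csv.reader(src), csv.writer(dst)
            for row in reader:
                extra = len(row) - EXPECTED
                if extra > 0:
                    # Re-join the fragments a stray comma split apart.
                    row = (row[:NAME_COL]
                           + [",".join(row[NAME_COL:NAME_COL + extra + 1])]
                           + row[NAME_COL + extra + 1:])
                writer.writerow(row)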


  • Preventing an Apache 2 Server from Logging Sensitive Data

    - by jstr
    Apache 2 by default logs the entire request URI, including the query string, of every request. What is a straightforward way to prevent an Apache 2 web server from logging sensitive data (for example passwords, credit card numbers, etc.) but still log the rest of the request? I would like to log all log-in attempts, including the attempted username as Apache does by default, while preventing Apache from logging the password directly. I have looked through the Apache 2 documentation, and there doesn't appear to be an easy way to do this other than completely preventing logging of these requests (using SetEnvIf). How can I accomplish this?
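    One hedged, partial option using stock mod_log_config directives: swap %r (the full request line, query string included) for the path-only %U in a custom format. This keeps query-string secrets out of the access log; note that POST bodies (where form passwords usually travel) are not written to the access log by default anyway.

        # %m = method, %U = URL path without the query string, %H = protocol.
        # The stock common/combined formats use %r, which includes the query.
        LogFormat "%h %l %u %t \"%m %U %H\" %>s %b" noquery
        CustomLog logs/access_log noquery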


  • Open source monitoring tool without sending data to "Their Server"

    - by hangu
    I'm trying to find an open source server monitoring tool. I know there are a lot, but I couldn't find what I need. The basic process of the monitoring tools I've used before was: 1) install an agent on the server I want to monitor; 2) the agent sends data to "their server"; 3) I check the health of my server through a web page they present. What I need is to avoid step 2. Are there any monitoring tools like that I can use? I have Windows 2008 and Linux servers; simple features (CPU, memory, network, etc.) will be enough. Thank you.


  • datagrid height issue in nested datagrid( when using three data grid)

    - by prince23
    hi, I have a nested DataGrid (three DataGrids deep) and I am able to show data with no issue. The first DataGrid has 5 rows. The problem comes when you click a row in the first DataGrid: that expands a second DataGrid (which has 10 rows). Say I click row 3 in the second grid; it shows further records in a third DataGrid. When I then click row 5 in the second grid, it again shows further records in the third DataGrid. All the records display fine, but when I try to collapse row 3 in the second grid, it collapses while the height the third DataGrid took up to show its records remains, so we can see a blank space between the second DataGrid and the third one. In the first column of every grid I have a button, and I handle its Click event with the expand/collapse code below; apart from the handler name (btn1_Click, btn2_Click, btn1_Click), the same code is repeated for each of the three grids. Hope my question is clear; any help would be great.

        private void btn1_Click(object sender, RoutedEventArgs e)
        {
            try
            {
                Button btnExpandCollapse = sender as Button;
                Image imgScore = (Image)btnExpandCollapse.FindName("img");

                // Walk up the visual tree from the click source to the row.
                DependencyObject dep = (DependencyObject)e.OriginalSource;
                while ((dep != null) && !(dep is DataGridRow))
                {
                    dep = VisualTreeHelper.GetParent(dep);
                }

                // If we found the clicked row, toggle the details visibility.
                if (dep != null && dep is DataGridRow)
                {
                    DataGridRow row = (DataGridRow)dep;
                    if (row.DetailsVisibility == Visibility.Collapsed)
                    {
                        imgScore.Source = new BitmapImage(new Uri("/Images/a1.JPG", UriKind.Relative));
                        row.DetailsVisibility = Visibility.Visible;
                    }
                    else
                    {
                        imgScore.Source = new BitmapImage(new Uri("/Images/a2.JPG", UriKind.Relative));
                        row.DetailsVisibility = Visibility.Collapsed;
                    }
                }
            }
            catch (System.Exception) { }
        }


  • Tomcat 6, JPA and Data sources

    - by asrijaal
    Hi there, I'm trying to get a data source working in my JSF app. I defined the data source in my web app's context.xml.

    webapp/META-INF/context.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <Context antiJARLocking="true" path="/Sale">
          <Resource auth="Container" driverClassName="com.mysql.jdbc.Driver"
                    maxActive="20" maxIdle="10" maxWait="-1" name="Sale"
                    password="admin" type="javax.sql.DataSource"
                    url="jdbc:mysql://localhost/sale" username="admin"/>
        </Context>

    web.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">
          <filter>
            <display-name>RichFaces Filter</display-name>
            <filter-name>richfaces</filter-name>
            <filter-class>org.ajax4jsf.Filter</filter-class>
          </filter>
          <filter-mapping>
            <filter-name>richfaces</filter-name>
            <servlet-name>Faces Servlet</servlet-name>
            <dispatcher>REQUEST</dispatcher>
            <dispatcher>FORWARD</dispatcher>
            <dispatcher>INCLUDE</dispatcher>
          </filter-mapping>
          <servlet>
            <servlet-name>Faces Servlet</servlet-name>
            <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
            <load-on-startup>1</load-on-startup>
          </servlet>
          <servlet-mapping>
            <servlet-name>Faces Servlet</servlet-name>
            <url-pattern>/faces/*</url-pattern>
          </servlet-mapping>
          <session-config>
            <session-timeout>30</session-timeout>
          </session-config>
          <welcome-file-list>
            <welcome-file>faces/welcomeJSF.jsp</welcome-file>
          </welcome-file-list>
          <context-param>
            <param-name>org.richfaces.SKIN</param-name>
            <param-value>ruby</param-value>
          </context-param>
        </web-app>

    and my persistence.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <persistence version="1.0" xmlns="http://java.sun.com/xml/ns/persistence"
                     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                     xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd">
          <persistence-unit name="SalePU" transaction-type="RESOURCE_LOCAL">
            <provider>oracle.toplink.essentials.PersistenceProvider</provider>
            <non-jta-data-source>Sale</non-jta-data-source>
            <class>org.comp.sale.AnfrageAnhang</class>
            <class>org.comp.sale.Beschaffung</class>
            <class>org.comp.sale.Konto</class>
            <class>org.comp.sale.Anfrage</class>
            <exclude-unlisted-classes>false</exclude-unlisted-classes>
          </persistence-unit>
        </persistence>

    But no data source seems to be created by Tomcat; I only get this exception:

        Exception [TOPLINK-7060] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.ValidationException
        Exception Description: Cannot acquire data source [Sale].
        Internal Exception: javax.naming.NameNotFoundException: Name Sale is not bound in this Context

    The jars needed for the MySQL driver are included in the WEB-INF/lib dir. What am I doing wrong?
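    One commonly cited cause, offered here as a hedged sketch rather than a confirmed fix for this setup: Tomcat binds container resources under java:comp/env, so the persistence unit has to look the data source up with that prefix, and the webapp should declare a matching resource-ref:

        <!-- persistence.xml: use the container JNDI prefix -->
        <non-jta-data-source>java:comp/env/Sale</non-jta-data-source>

        <!-- web.xml: expose the Resource from context.xml to the webapp -->
        <resource-ref>
          <res-ref-name>Sale</res-ref-name>
          <res-type>javax.sql.DataSource</res-type>
          <res-auth>Container</res-auth>
        </resource-ref>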


  • Spring.Data.NHibernate12:::Application not closing database connection(Getting max connection pool

    - by anupam3m
    Even after a successful transaction, the application's connection to the database persists. In the NHibernate log it shows:

        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.SessionImpl [(null)] <(null) - executing flush
        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.ConnectionManager [(null)] <(null) - registering flush begin
        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.ConnectionManager [(null)] <(null) - registering flush end
        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.SessionImpl [(null)] <(null) - post flush
        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.SessionImpl [(null)] <(null) - before transaction completion
        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.ConnectionManager [(null)] <(null) - aggressively releasing database connection
        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Connection.ConnectionProvider [(null)] <(null) - Closing connection
        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.SessionImpl [(null)] <(null) - transaction completion
        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Transaction.AdoTransaction [(null)] <(null) - running AdoTransaction.Dispose()
        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.SessionImpl [(null)] <(null) - closing session
        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.BatcherImpl [(null)] <(null) - running BatcherImpl.Dispose(true)

    Underneath given is my data configuration file:

        <?xml version="1.0" encoding="utf-8" ?>
        <objects xmlns="http://www.springframework.net"
                 xmlns:db="http://www.springframework.net/database"
                 xmlns:tx="http://www.springframework.net/tx">

          <property name="CacheSettings" ref="CacheSettings"/>
          type="Risco.Rsp.Ac.AMAC.CacheMgmt.Utilities.UpdateEntityCacheHelper, Risco.Rsp.Ac.AMAC.CacheMgmt.Utilities" singleton="false"/>

          <object type="Spring.Objects.Factory.Config.PropertyPlaceholderConfigurer, Spring.Core">
            <property name="ConfigSections" value="databaseSettings"/>
          </object>

          <db:provider id="AMACDbProvider" provider="OracleClient-2.0"
                       connectionString="Data Source=RISCODEVDB;User ID=amacdevuser; Password=amacuser1234;"/>

          <object id="NHibernateSessionFactory" type="Spring.Data.NHibernate.LocalSessionFactoryObject,Spring.Data.NHibernate12">
            <property name="DbProvider" ref="AMACDbProvider"/>
            <value>Risco.Rsp.Ac.AMAC.CacheMappings</value>
            </property>
            <dictionary>
              <entry key="hibernate.connection.provider" value="NHibernate.Connection.DriverConnectionProvider"/>
              <entry key="hibernate.dialect" value="NHibernate.Dialect.Oracle9Dialect"/>
              value="NHibernate.Driver.OracleClientDriver"/>
            </dictionary>
          </object>

          singleton="false"
            <property name="SessionFactory" ref="NHibernateSessionFactory"/>
            <property name="TemplateFlushMode" value="Auto"/>
            <property name="CacheQueries" value="true"/>
            <property name="EntityInterceptor" ref="AuditLogger"/>

          type="Spring.Data.NHibernate.HibernateTransactionManager, Spring.Data.NHibernate12">
            <property name="DbProvider" ref="AMACDbProvider"/>
            <property name="SessionFactory" ref="NHibernateSessionFactory"/>
            <property name="EntityInterceptor" ref="AuditLogger"/>

          type="Spring.Transaction.Interceptor.TransactionProxyFactoryObject,Spring.Data"
            <property name="PlatformTransactionManager" ref="transactionManager"/>
            <property name="Target" ref="EventPubSubDAO"/>
            <property name="TransactionAttributes">
              <name-values>
                <add key="Save*" value="PROPAGATION_REQUIRES_NEW"/>
                <add key="Delete*" value="PROPAGATION_REQUIRED"/>
              </name-values>
            </property>

          type="Risco.Rsp.Ac.AMAC.DAO.EventPubSubMgmt.EventPubSubDAO, Risco.Rsp.Ac.AMAC.DAO.EventPubSubMgmt"
          </object>

          <tx:attribute-driven/>
        </objects>

    Please help me out with this issue. Thanks


  • Reading XML File

    - by Joy
    I'm developing an iPhone application which uses the Google weather API to forecast the weather. The web service is giving me data in the following format:

        <?xml version="1.0"?>
        <xml_api_reply version="1">
          <weather module_id="0" tab_id="0" mobile_row="0" mobile_zipped="1" row="0" section="0">
            <forecast_information>
              <city data="Kolkata, West Bengal"/>
              <postal_code data="Kolkata"/>
              <latitude_e6 data=""/>
              <longitude_e6 data=""/>
              <forecast_date data="2010-04-28"/>
              <current_date_time data="2010-04-28 10:20:00 +0000"/>
              <unit_system data="US"/>
            </forecast_information>
            <current_conditions>
              <condition data="Haze"/>
              <temp_f data="97"/>
              <temp_c data="36"/>
              <humidity data="Humidity: 53%"/>
              <icon data="/ig/images/weather/haze.gif"/>
              <wind_condition data="Wind: S at 12 mph"/>
            </current_conditions>
            <forecast_conditions>
              <day_of_week data="Wed"/>
              <low data="82"/>
              <high data="91"/>
              <icon data="/ig/images/weather/chance_of_rain.gif"/>
              <condition data="Chance of Rain"/>
            </forecast_conditions>
            <forecast_conditions>
              <day_of_week data="Thu"/>
              <low data="82"/>
              <high data="96"/>
              <icon data="/ig/images/weather/rain.gif"/>
              <condition data="Rain"/>
            </forecast_conditions>
            <forecast_conditions>
              <day_of_week data="Fri"/>
              <low data="82"/>
              <high data="96"/>
              <icon data="/ig/images/weather/sunny.gif"/>
              <condition data="Clear"/>
            </forecast_conditions>
            <forecast_conditions>
              <day_of_week data="Sat"/>
              <low data="78"/>
              <high data="98"/>
              <icon data="/ig/images/weather/mostly_sunny.gif"/>
              <condition data="Mostly Sunny"/>
            </forecast_conditions>
          </weather>

    As I'm new to iPhone development, I'm facing problems reading this with an XML parser. Please help me out of this problem. Looking forward to your valuable reply. Thanks in advance.
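    The layout is regular: every value hangs off a data attribute. A sketch of the extraction logic (shown in Python for brevity; on the iPhone the same walk would be done with NSXMLParser or libxml2, and the file name below is illustrative):

        import xml.etree.ElementTree as ET

        root = ET.parse("weather.xml").getroot()   # the <xml_api_reply> element

        current = root.find("weather/current_conditions")
        print(current.find("condition").get("data"))   # e.g. "Haze"
        print(current.find("temp_c").get("data"))      # e.g. "36"

        for day in root.findall("weather/forecast_conditions"):
            print(day.find("day_of_week").get("data"),
                  day.find("low").get("data"),
                  day.find("high").get("data"),
                  day.find("condition").get("data"))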


  • simplejson double escapes data causing invalid JSON string

    - by mike_hornbeck
    I have a simple form for managing manufacturers in my shop. After posting the form, an AJAX call returns JSON with updated data to the form. The problem is that the returned string is invalid; it looks as if it was double-escaped. Strangely, a similar approach works across the whole shop without any problems. I'm using jQuery 1.6 as the JavaScript framework. The model consists of 3 fields: a char field for the name, a text field for the description, and an image field for the manufacturer logo. The function:

        def update_data(request, manufacturer_id):
            """Updates data of manufacturer with given manufacturer id."""
            manufacturer = Manufacturer.objects.get(pk=manufacturer_id)
            form = ManufacturerDataForm(request.FILES, request.POST, instance=manufacturer)
            if form.is_valid():
                form.save()
            msg = _(u"Manufacturer data has been saved.")
            html = [
                ["#data", manufacturer_data_inline(request, manufacturer_id, form)],
                ["#selectable-factories-inline", selectable_manufacturers_inline(request, manufacturer_id)],
            ]
            result = simplejson.dumps({"html": html}, cls=LazyEncoder)
            return HttpResponse(result)

    The error in the console, with the invalid JSON:

        uncaught exception: Invalid JSON: {"html": [["#data", "\n<h2>Dane</h2>\n<div class="\&quot;manufacturer-image\&quot;">\n \n</div>\n<form action="\&quot;/manage/update-manufacturer-data/1\&quot;" method="\&quot;post\&quot;">\n \n <div class="\&quot;field\&quot;">\n <div class="\&quot;label\&quot;">\n <label for="\&quot;id_name\&quot;">Nazwa</label>:\n </div>\n \n \n <div class="\&quot;error\&quot;">\n <input id="\&quot;id_name\&quot;" name="\&quot;name\&quot;" maxlength="\&quot;50\&quot;" type="\&quot;text\&quot;">\n <ul class="\&quot;errorlist\&quot;"><li>Pole wymagane</li></ul>\n </div>\n \n </div>\n\n <div class="\&quot;field\&quot;">\n <div class="\&quot;label\&quot;">\n <label for="\&quot;id_image\&quot;">Zdjecie</label>:\n </div>\n \n \n <div>\n <input name="\&quot;image\&quot;" id="\&quot;id_image\&quot;" type="\&quot;file\&quot;">\n </div>\n \n </div>\n\n <div class="\&quot;field\&quot;">\n <div class="\&quot;label\&quot;">\n <label for="\&quot;id_description\&quot;">Opis</label>:\n </div>\n \n \n <div>\n <textarea id="\&quot;id_description\&quot;" rows="\&quot;10\&quot;" cols="\&quot;40\&quot;" name="\&quot;description\&quot;"></textarea>\n </div>\n \n </div>\n \n <div class="\&quot;buttons\&quot;">\n <input class="\&quot;ajax-save-button" button\"="" type="\&quot;submit\&quot;">\n </div>\n</form>"], ["#selectable-factories-inline", "\n <div>\n <a class="\&quot;selectable" selected\"\n="" href="%5C%22/manage/manufacturer/1%5C%22">\n L1\n </a>\n </div>\n\n <div>\n <a class="\&quot;selectable" \"\n="" href="%5C%22/manage/manufacturer/4%5C%22">\n KR3W\n </a>\n </div>\n\n <div>\n <a class="\&quot;selectable" \"\n="" href="%5C%22/manage/manufacturer/3%5C%22">\n L1TA\n </a>\n </div>\n\n"]]}

    Any ideas?


  • SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress

    - by pinaldave
    One of the most common data integration tasks I run into is a desire to move data from a file into a database table. Generally the user is familiar with his data, the structure of the file, and the database table, but is unfamiliar with data integration tools and therefore views this task as something that is difficult. What these users really need is a point and click approach that minimizes the learning curve for the data integration tool. This is what CSVexpress (www.CSVexpress.com) is all about! It is based on expressor Studio, a data integration tool I’ve been reviewing over the last several months.

    With CSVexpress, moving data between data sources can be as simple as providing the database connection details, describing the structure of the incoming and outgoing data and then connecting two pre-programmed operators. There’s no need to learn the intricacies of the data integration tool or to write code. Let’s look at an example.

    Suppose I have a comma separated value data file with data similar to the following, which is a listing of terminated employees that includes their hiring and termination date, department, job description, and final salary.

        EMP_ID,STRT_DATE,END_DATE,JOB_ID,DEPT_ID,SALARY
        102,13-JAN-93,24-JUL-98 17:00,Programmer,60,"$85,000"
        101,21-SEP-89,27-OCT-93 17:00,Account Representative,110,"$65,000"
        103,28-OCT-93,15-MAR-97 17:00,Account Manager,110,"$75,000"
        304,17-FEB-96,19-DEC-99 17:00,Marketing,20,"$45,000"
        333,24-MAR-98,31-DEC-99 17:00,Data Entry Clerk,50,"$35,000"
        100,17-SEP-87,17-JUN-93 17:00,Administrative Assistant,90,"$40,000"
        334,24-MAR-98,31-DEC-98 17:00,Sales Representative,80,"$40,000"
        400,01-JAN-99,31-DEC-99 17:00,Sales Manager,80,"$55,000"

    Notice the concise format used for the date values, the fact that the termination date includes both date and time information, and that the salary is clearly identified as money by the dollar sign and digit grouping. In moving this data to a database table I want to express the dates using a format that includes the century since it’s obvious that this listing could include employees who left the company in both the 20th and 21st centuries, and I want the salary to be stored as a decimal value without the currency symbol and grouping character. Most data integration tools would require coding within a transformation operation to effect these changes, but not expressor Studio. Directives for these modifications are included in the description of the incoming data.

    Besides starting the expressor Studio tool and opening a project, the first step is to create connection artifacts, which describe to expressor where data is stored. For this example, two connection artifacts are required: a file connection, which encapsulates the file system location of my file; and a database connection, which encapsulates the database connection information. With expressor Studio, I use wizards to create these artifacts.

    First click New Connection > File Connection in the Home tab of expressor Studio’s ribbon bar, which starts the File Connection wizard. In the first window, I enter the path to the directory that contains the input file. Note that the file connection artifact only specifies the file system location, not the name of the file. Then I click Next and enter a meaningful name for this connection artifact; clicking Finish closes the wizard and saves the artifact.
    To create the Database Connection artifact, I must know the location, or instance name, of the target database and have the credentials of an account with sufficient privileges to write to the target table. To use expressor Studio’s features to the fullest, this account should also have the authority to create a table.

    I click New Connection > Database Connection in the Home tab of expressor Studio’s ribbon bar, which starts the Database Connection wizard. expressor Studio includes high-performance drivers for many relational database management systems, so I can simply make a selection from the “Supplied database drivers” drop down control. If my desired RDBMS isn’t listed, I can optionally use an existing ODBC DSN by selecting the “Existing DSN” radio button. In the following window, I enter the connection details. With Microsoft SQL Server, I may choose to use Windows Authentication rather than account credentials. After clicking Next, I enter a meaningful name for this connection artifact and clicking Finish closes the wizard and saves the artifact.

    Now I create a schema artifact, which describes the structure of the file data. When expressor reads a file, all data fields are typed as strings. In some use cases this may be exactly what is needed and there is no need to edit the schema artifact. But in this example, editing the schema artifact will be used to specify how the data should be transformed; that is, reformat the dates to include century designations, change the employee and job IDs to integers, and convert the salary to a decimal value.

    Again a wizard is used to create the schema artifact. I click New Schema > Delimited Schema in the Home tab of expressor Studio’s ribbon bar, which starts the Delimited Schema wizard. In the first window, I click Get Data from File, which then displays a listing of the file connections in the project. When I click on the file connection I previously created, a browse window opens to this file system location; I then select the file and click Open, which imports 10 lines from the file into the wizard. I now view the file’s content and confirm that the appropriate delimiter characters are selected in the “Field Delimiter” and “Record Delimiter” drop down controls; then I click Next.

    Since the input file includes a header row, I can easily indicate that fields in the file should be identified through the corresponding header value by clicking “Set All Names from Selected Row.” Alternatively, I could enter a different identifier into the Field Details > Name text box. I click Next and enter a meaningful name for this schema artifact; clicking Finish closes the wizard and saves the artifact.

    Now I open the schema artifact in the schema editor. When I first view the schema’s content, I note that the types of all attributes in the Semantic Type (the right-hand panel) are strings and that the attribute names are the same as the field names in the data file. To change an attribute’s name and type, I highlight the attribute and click Edit in the Attributes grouping on the Schema > Edit tab of the editor’s ribbon bar. This opens the Edit Attribute window; I can change the attribute name and select the desired type from the “Data type” drop down control. In this example, I change the name of each attribute to the name of the corresponding database table column (EmployeeID, StartingDate, TerminationDate, JobDescription, DepartmentID, and FinalSalary).
    Then for the EmployeeID and DepartmentID attributes, I select Integer as the data type; for the StartingDate and TerminationDate attributes, I select Datetime; and for the FinalSalary attribute, I select the Decimal type.

    But I can do much more in the schema editor. For the datetime attributes, I can set a constraint that ensures that the data adheres to some predetermined specifications; a starting date must be later than January 1, 1980 (the date on which the company began operations) and a termination date must be earlier than 11:59 PM on December 31, 1999. I simply select the appropriate constraint and enter the value (1980-01-01 00:00 as the starting date and 1999-12-31 11:59 as the termination date).

    As a last step in setting up these datetime conversions, I edit the mapping, describing the format of each datetime type in the source file. I highlight the mapping line for the StartingDate attribute and click Edit Mapping in the Mappings grouping on the Schema > Edit tab of the editor’s ribbon bar. This opens the Edit Mapping window in which I either enter, or select, a format that describes how the datetime values are represented in the file. Note the use of Y01 as the syntax for the year. This syntax is the indicator to expressor Studio to derive the century by setting any year later than 01 to the 20th century and any year before 01 to the 21st century. As each datetime value is read from the file, the year values are transformed into century and year values. For the TerminationDate attribute, my format also indicates that the datetime value includes hours and minutes.

    And now to the Salary attribute. I open its mapping and in the Edit Mapping window select the Currency tab and the “Use currency” check box. This indicates that the file data will include the dollar sign (or in Europe the Pound or Euro sign), which should be removed. And on the Grouping tab, I select the “Use grouping” checkbox and enter 3 into the “Group size” text box, a comma into the “Grouping character” text box, and a decimal point into the “Decimal separator” character text box. These entries allow the string to be properly converted into a decimal value.

    By making these entries into the schema that describes my input file, I’ve specified how I want the data transformed prior to writing to the database table and completely removed the requirement for coding within the data integration application itself.

    Assembling the data integration application is simple. Onto the canvas I drag the Read File and Write Table operators, connecting the output of the Read File operator to the input of the Write Table operator. Next, I select the Read File operator and its Properties panel opens on the right-hand side of expressor Studio. For each property, I can select an appropriate entry from the corresponding drop down control. Clicking on the button to the right of the “File name” text box opens the file system location specified in the file connection artifact, allowing me to select the appropriate input file. I indicate also that the first row in the file, the header row, should be skipped, and that any record that fails one of the datetime constraints should be skipped. I then select the Write Table operator and in its Properties panel specify the database connection, Normal for the “Mode,” and the “Truncate” and “Create Missing Table” options.
    If my target table does not yet exist, expressor will create the table using the information encapsulated in the schema artifact assigned to the operator.

    The last task needed to complete the application is to create the schema artifact used by the Write Table operator. This is extremely easy, as another wizard is capable of using the schema artifact assigned to the Read File operator to create a schema artifact for the Write Table operator. In the Write Table Properties panel, I click the drop down control to the right of the “Schema” property and select “New Table Schema from Upstream Output…” from the drop down menu.

    The wizard first displays the table description and in its second screen asks me to select the database connection artifact that specifies the RDBMS in which the target table will exist. The wizard then connects to the RDBMS and retrieves a list of database schemas from which I make a selection. The fourth screen gives me the opportunity to fine tune the table’s description. In this example, I set the width of the JobDescription column to a maximum of 40 characters and select money as the type of the LastSalary column. I also provide the name for the table.

    This completes development of the application. The entire application was created through the use of wizards, and the required data transformations were specified through simple constraints and specifications rather than through coding. To develop this application, I only needed a basic understanding of expressor Studio, a level of expertise that can be gained by working through a few introductory tutorials. expressor Studio is as close to a point and click data integration tool as one could want, and I urge you to try this product if you have a need to move data between files or from files to database tables.

    Check out CSVexpress in more detail. It offers a few basic video tutorials and a preview of expressor Studio 3.5, which will support the reading and writing of data into Salesforce.com.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)
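    As a hedged aside for readers who would rather script the same transformations (two-digit-year handling, currency stripping) than use a GUI tool, a pandas equivalent might look like this; the file name, table name, and connection string are illustrative only:

        import pandas as pd
        from sqlalchemy import create_engine

        df = pd.read_csv("terminated_employees.csv")

        # %y pivots two-digit years into centuries (Python's cutoff is 69,
        # similar in spirit to the Y01 directive described in the article).
        df["STRT_DATE"] = pd.to_datetime(df["STRT_DATE"], format="%d-%b-%y")
        df["END_DATE"] = pd.to_datetime(df["END_DATE"], format="%d-%b-%y %H:%M")

        # Strip the dollar sign and digit grouping, then store as a decimal.
        df["SALARY"] = df["SALARY"].str.replace(r"[$,]", "", regex=True).astype(float)

        engine = create_engine("mssql+pyodbc://...")   # illustrative DSN
        df.to_sql("TerminatedEmployees", engine, if_exists="replace", index=False)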


  • Microsoft Silverlight 4 Data and Services Cookbook &ndash; Book Review (sort of)

    - by Jim Duffy
    I just received my copy of the Microsoft Silverlight 4 Data and Services Cookbook, co-authored by fellow Microsoft Regional Director Gill Cleeren, and at first glance I like what I see. I’ve always been a fan of the “cookbook” approach to technical books because they are problem/solution oriented. Often developers need solutions to specific questions like “how do I send email from within my .NET application” and so on; and yes, that was a blatant plug for my article explaining how to accomplish just that, but I digress. :-) I also enjoy the cookbook approach because you can just start flipping pages, randomly stop somewhere, and see what nugget of information is staring up at you from the page. Anyway, what I like about this book is that it focuses on a specific area of Silverlight development: accessing data and services. The book is broken down into the following chapters:

        Chapter 1: Learning the Nuts and Bolts of Silverlight 4
        Chapter 2: An Introduction to Data Binding
        Chapter 3: Advanced Data Binding
        Chapter 4: The Data Grid
        Chapter 5: The DataForm
        Chapter 6: Talking to Services
        Chapter 7: Talking to WCF and ASMX Services
        Chapter 8: Talking to REST and WCF Data Services
        Chapter 9: Talking to WCF RIA Services
        Chapter 10: Converting Your Existing Applications to Use Silverlight

    As you can see, this book is all about working with Silverlight 4 and data. I’m looking forward to taking a closer look at it. Have a day. :-|


  • What strategy should be employed to access Facebook data offline?

    - by user686021
    I'm working on a project similar to Klout, which provides detail about how you influence other people and who influences you. We'll be fetching data from a few social networking sites (i.e., LinkedIn, Facebook, Twitter, etc.) to analyze how users interact with one another. For that we need to parse the data, store it in a db, and analyze it so that the strength of the relation between two users can be determined. We'll be accessing data offline as well, to provide accurate results. For Facebook activity, we need access to users' news feeds and wall data, which includes likes, comments, shares, etc. To decide how one user influences another, we'll store all the data and analyze it. I need suggestions on what steps to take for good performance. We'll be using ASP.NET (C#) Web Forms, SQL Server, and jQuery. The main concern is parsing the data and its storage and retrieval with the least overhead. I've summarized a few points below:

        1. Should we switch to a document-oriented database, like MongoDB or RavenDB, for the whole app or part of it, even though no team member has experience with them?
        2. Should we use SQL Server Analysis Services?
        3. Is there any library other than Json.NET for parsing the data?
        4. Is it advisable to use any C# library over FQL + GET requests?

    I've tried to provide as much info as possible. Please share your views.


  • Is application-specific data required for good unit testing?

    - by stinkycheeseman
    I am writing unit tests for a fairly simple function that depends on a fairly complicated set of data. Essentially, the object I am manipulating represents a graph, and this function determines whether to chart a line, bar, or pie chart based on the data that came back from the server. This is a simplified version, using jQuery:

        setDefaultChartType: function (graphObject) {
            var prop1 = graphObject.properties.key;
            var numCols = 0;
            $.each(graphObject.columns, function (colIndex, column) {
                numCols++;
            });
            if (numCols > 6 || (prop1 > 1 && graphObject.data.length == 1)) {
                graphObject.setChartType("line");
            } else if (numCols <= 6 && prop1 == 1) {
                graphObject.setChartType("bar");
            } else if (numCols <= 6 && prop1 > 1) {
                graphObject.setChartType("pie");
            }
        }

    My question is: should I use mock data procured from the actual database, or can I just fabricate data that fits the different cases? I'm afraid that fabricating data will not expose bugs arising from changes in the database; on the other hand, it would require a lot more effort to keep the test data up-to-date, and I'm not sure that is necessary.


  • Wednesday at Oracle OpenWorld 2012 - Must See Session: “Event-Driven Patterns and Best Practices: Even More Important with Big Data”

    - by Lionel Dubreuil
    Don’t miss this “CON8636 - Event-Driven Patterns and Best Practices: Even More Important with Big Data” session:

        Speakers: Faisal Nazir - Senior Solutions Architect, Motorola
                  Shinichiro Takahashi - Senior Manager, Service Platform Department, NTT DOCOMO, INC.
                  Robin Smith - Product Management/Strategy Director - Oracle Event Processing, Oracle
        Date: Wednesday, Oct 3
        Time: 10:15 AM - 11:15 AM
        Location: Moscone South - 310

    As the demand for big data analytics and integration grows across all industries, this session focuses on the role of the Oracle event-driven solution platform in delivering vital real-time integrated analysis intelligence to the data streams consumed and emitted from these large distributed data stores. Objectives for this session are to:

        - Increase awareness of Oracle Event Processing, showcasing tight alignment with big data solutions
        - Highlight emerging usage patterns in relation to streaming event data and distributed data stores
        - Show a significant Oracle competitive advantage over IBM solutions advertised in this domain


  • Dynamic DataGrid columns in WPF DataGrid based on the underlying set of data (and their type)

    - by StatsMan
    Hello everyone, I've got kind of a conceptual question. I am in the process of wrapping some statistics classes I wrote into WPF. For that I have two DataGrids (currently DataGridViews in WinForms). In one DataGrid, each row represents a column in the other. There I can set up different variables (as in mathematical/statistical variables) with fields like "Header", "DataType", "ValidationBehaviour", and "DisplayType". There I can also set up how each should be displayed: some columns can automatically be set to ComboBox columns, some to TextBox columns, and so on and so forth. Once I've set up these columns, I can go to the other grid and enter my data. I may, for instance, have generated (in grid 1) one column called "Annual Gross Salary" taking numerical input, and another column called "Education" with "0=NoEducation", "1=College Level", "3=Universitary", etc. These labels are displayed as text in the combo box, and my statistics engine behind it then selects the respective value (0-3) for calculations (i.e., ordinal and nominal variables).

    So: in WinForms I could basically generate all the columns by hand in code and then add my data in the respective cells/rows. In WPF I thought that must be easy to replicate. However, when I got started yesterday with ICustomPropertyDescriptor (maybe I was too thick), it didn't give me the results I was looking for. Basically, I just need to be able to dynamically generate columns (and rows) with different layouts and controls (ComboBox, plain input, DateTime) based on the data that I have, but I don't really know how to go about it. So here in summary:

        DataGrid 1
        - Its purpose is to display the columns that have been specified in DataGrid 2
        - In the rows below those columns, the user can add any kind of data allowed by the column specifications

        DataGrid 2
        - Each row in this grid represents a column in DataGrid 1
        - Contains fields like Name/Header, DataType, Validation Behaviour, Default Value, Data Formatting, etc.
        - Also lets the user set up how the column should be displayed: for instance, a ComboBoxColumn (with its available options), DateTime, a normal TextBox, a CheckBox, etc.
        - After a row is added here, it automatically appears as a new column in DataGrid 1

    I'd appreciate any kind of pointer in the right direction. Thanks very, very much in advance! :)


  • JsTree v1.0 - How to manipulate effectively the data from the backend to render the trees and operate correctly?

    - by Jean Paul
    Backend info: PHP 5 / MySQL URL: http://github.com/downloads/vakata/jstree/jstree_pre1.0_fix_1.zip Table structure for table discussions_tree -- CREATE TABLE IF NOT EXISTS `discussions_tree` ( `id` int(11) NOT NULL AUTO_INCREMENT, `parent_id` int(11) NOT NULL DEFAULT '0', `user_id` int(11) NOT NULL DEFAULT '0', `label` varchar(16) DEFAULT NULL, `position` bigint(20) unsigned NOT NULL DEFAULT '0', `left` bigint(20) unsigned NOT NULL DEFAULT '0', `right` bigint(20) unsigned NOT NULL DEFAULT '0', `level` bigint(20) unsigned NOT NULL DEFAULT '0', `type` varchar(255) CHARACTER SET utf8 COLLATE utf8_unicode_ci DEFAULT NULL, `h_label` varchar(16) NOT NULL DEFAULT '', `fulllabel` varchar(255) DEFAULT NULL, UNIQUE KEY `uidx_3` (`id`), KEY `idx_1` (`user_id`), KEY `idx_2` (`parent_id`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=8 ; /*The first element should in my understanding not even be shown*/ INSERT INTO `discussions_tree` (`id`, `parent_id`, `user_id`, `label`, `position`, `left`, `right`, `level`, `type`, `h_label`, `fulllabel`) VALUES (0, 0, 0, 'Contacts', 0, 1, 1, 0, NULL, '', NULL); INSERT INTO `discussions_tree` (`id`, `parent_id`, `user_id`, `label`, `position`, `left`, `right`, `level`, `type`, `h_label`, `fulllabel`) VALUES (1, 0, 0, 'How to Tag', 1, 2, 2, 0, 'drive', '', NULL); Front End : I've simplified the logic, it has 6 trees actually inside of a panel and that works fine $array = array("Discussions"); $id_arr = array("d"); $nid = 0; foreach ($array as $k=> $value) { $nid++; ?> <li id="<?=$value?>" class="label"> <a href='#<?=$value?>'><span> <?=$value?> </span></a> <div class="sub-menu" style="height:auto; min-height:120px; background-color:#E5E5E5" > <div class="menu" id="menu_<?=$id_arr[$k]?>" style="position:relative; margin-left:56%"> <img src="./js/jsTree/create.png" alt="" id="create" title="Create" > <img src="./js/jsTree/rename.png" alt="" id="rename" title="Rename" > <img src="./js/jsTree/remove.png" alt="" id="remove" title="Delete"> <img src="./js/jsTree/cut.png" alt="" id="cut" title="Cut" > <img src="./js/jsTree/copy.png" alt="" id="copy" title="Copy"> <img src="./js/jsTree/paste.png" alt="" id="paste" title="Paste"> </div> <div id="<?=$id_arr[$k]?>" class="jstree_container"></div> </div> </li> <!-- JavaScript neccessary for this tree : <?=$value?> --> <script type="text/javascript" > jQuery(function ($) { $("#<?=$id_arr[$k]?>").jstree({ // List of active plugins used "plugins" : [ "themes", "json_data", "ui", "crrm" , "hotkeys" , "types" , "dnd", "contextmenu"], // "ui" :{ "initially_select" : ["#node_"+ $nid ] } , "crrm": { "move": { "always_copy": "multitree" }, "input_width_limit":128 }, "core":{ "strings":{ "new_node" : "New Tag" }}, "themes": {"theme": "classic"}, "json_data" : { "ajax" : { "url" : "./js/jsTree/server-<?=$id_arr[$k]?>.php", "data" : function (n) { // the result is fed to the AJAX request `data` option return { "operation" : "get_children", "id" : n.attr ? n.attr("id").replace("node_","") : 1, "state" : "", "user_id": <?=$uid?> }; } } } , "types" : { "max_depth" : -1, "max_children" : -1, "types" : { // The default type "default" : { "hover_node":true, "valid_children" : [ "default" ], }, // The `drive` nodes "drive" : { // can have files and folders inside, but NOT other `drive` nodes "valid_children" : [ "default", "folder" ], "hover_node":true, "icon" : { "image" : "./js/jsTree/root.png" }, // those prevent the functions with the same name to be used on `drive` nodes.. 
internally the `before` event is used "start_drag" : false, "move_node" : false, "remove_node" : false } } }, "contextmenu" : { "items" : customMenu , "select_node": true} }) //Hover function binded to jstree .bind("hover_node.jstree", function (e, data) { $('ul li[rel="drive"], ul li[rel="default"], ul li[rel=""]').each(function(i) { $(this).find("a").attr('href', $(this).attr("id")+".php" ); }) }) //Create function binded to jstree .bind("create.jstree", function (e, data) { $.post( "./js/jsTree/server-<?=$id_arr[$k]?>.php", { "operation" : "create_node", "id" : data.rslt.parent.attr("id").replace("node_",""), "position" : data.rslt.position, "label" : data.rslt.name, "href" : data.rslt.obj.attr("href"), "type" : data.rslt.obj.attr("rel"), "user_id": <?=$uid?> }, function (r) { if(r.status) { $(data.rslt.obj).attr("id", "node_" + r.id); } else { $.jstree.rollback(data.rlbk); } } ); }) //Remove operation .bind("remove.jstree", function (e, data) { data.rslt.obj.each(function () { $.ajax({ async : false, type: 'POST', url: "./js/jsTree/server-<?=$id_arr[$k]?>.php", data : { "operation" : "remove_node", "id" : this.id.replace("node_",""), "user_id": <?=$uid?> }, success : function (r) { if(!r.status) { data.inst.refresh(); } } }); }); }) //Rename operation .bind("rename.jstree", function (e, data) { data.rslt.obj.each(function () { $.ajax({ async : true, type: 'POST', url: "./js/jsTree/server-<?=$id_arr[$k]?>.php", data : { "operation" : "rename_node", "id" : this.id.replace("node_",""), "label" : data.rslt.new_name, "user_id": <?=$uid?> }, success : function (r) { if(!r.status) { data.inst.refresh(); } } }); }); }) //Move operation .bind("move_node.jstree", function (e, data) { data.rslt.o.each(function (i) { $.ajax({ async : false, type: 'POST', url: "./js/jsTree/server-<?=$id_arr[$k]?>.php", data : { "operation" : "move_node", "id" : $(this).attr("id").replace("node_",""), "ref" : data.rslt.cr === -1 ? 1 : data.rslt.np.attr("id").replace("node_",""), "position" : data.rslt.cp + i, "label" : data.rslt.name, "copy" : data.rslt.cy ? 
1 : 0, "user_id": <?=$uid?> }, success : function (r) { if(!r.status) { $.jstree.rollback(data.rlbk); } else { $(data.rslt.oc).attr("id", "node_" + r.id); if(data.rslt.cy && $(data.rslt.oc).children("UL").length) { data.inst.refresh(data.inst._get_parent(data.rslt.oc)); } } } }); }); }); // This is for the context menu to bind with operations on the right clicked node function customMenu(node) { // The default set of all items var control; var items = { createItem: { label: "Create", action: function (node) { return {createItem: this.create(node) }; } }, renameItem: { label: "Rename", action: function (node) { return {renameItem: this.rename(node) }; } }, deleteItem: { label: "Delete", action: function (node) { return {deleteItem: this.remove(node) }; }, "separator_after": true }, copyItem: { label: "Copy", action: function (node) { $(node).addClass("copy"); return {copyItem: this.copy(node) }; } }, cutItem: { label: "Cut", action: function (node) { $(node).addClass("cut"); return {cutItem: this.cut(node) }; } }, pasteItem: { label: "Paste", action: function (node) { $(node).addClass("paste"); return {pasteItem: this.paste(node) }; } } }; // We go over all the selected items as the context menu only takes action on the one that is right clicked $.jstree._reference("#<?=$id_arr[$k]?>").get_selected(false, true).each(function(index,element) { if ( $(element).attr("id") != $(node).attr("id") ) { // Let's deselect all nodes that are unrelated to the context menu -- selected but are not the one right clicked $("#<?=$id_arr[$k]?>").jstree("deselect_node", '#'+$(element).attr("id") ); } }); //if any previous click has the class for copy or cut $("#<?=$id_arr[$k]?>").find("li").each(function(index,element) { if ($(element) != $(node) ) { if( $(element).hasClass("copy") || $(element).hasClass("cut") ) control=1; } else if( $(node).hasClass("cut") || $(node).hasClass("copy")) { control=0; } }); //only remove the class for cut or copy if the current operation is to paste if($(node).hasClass("paste") ) { control=0; // Let's loop through all elements and try to find if the paste operation was done already $("#<?=$id_arr[$k]?>").find("li").each(function(index,element) { if( $(element).hasClass("copy") ) $(this).removeClass("copy"); if ( $(element).hasClass("cut") ) $(this).removeClass("cut"); if ( $(element).hasClass("paste") ) $(this).removeClass("paste"); }); } switch (control) { //Remove the paste item from the context menu case 0: switch ($(node).attr("rel")) { case "drive": delete items.renameItem; delete items.deleteItem; delete items.cutItem; delete items.copyItem; delete items.pasteItem; break; case "default": delete items.pasteItem; break; } break; //Remove the paste item from the context menu only on the node that has either copy or cut added class case 1: if( $(node).hasClass("cut") || $(node).hasClass("copy") ) { switch ($(node).attr("rel")) { case "drive": delete items.renameItem; delete items.deleteItem; delete items.cutItem; delete items.copyItem; delete items.pasteItem; break; case "default": delete items.pasteItem; break; } } else //Re-enable it on the clicked node that does not have the cut or copy class { switch ($(node).attr("rel")) { case "drive": delete items.renameItem; delete items.deleteItem; delete items.cutItem; delete items.copyItem; break; } } break; //initial state don't show the paste option on any node default: switch ($(node).attr("rel")) { case "drive": delete items.renameItem; delete items.deleteItem; delete items.cutItem; delete items.copyItem; delete items.pasteItem; 
break; case "default": delete items.pasteItem; break; } break; } return items; } $("#menu_<?=$id_arr[$k]?> img").hover( function () { $(this).css({'cursor':'pointer','outline':'1px double teal'}) }, function () { $(this).css({'cursor':'none','outline':'1px groove transparent'}) } ); $("#menu_<?=$id_arr[$k]?> img").click(function () { switch(this.id) { //Create only the first element case "create": if ( $.jstree._reference("#<?=$id_arr[$k]?>").get_selected(false, true).length ) { $.jstree._reference("#<?=$id_arr[$k]?>").get_selected(false, true).each(function(index,element){ switch(index) { case 0: $("#<?=$id_arr[$k]?>").jstree("create", '#'+$(element).attr("id"), null, /*{attr : {href: '#' }}*/null ,null, false); break; default: $("#<?=$id_arr[$k]?>").jstree("deselect_node", '#'+$(element).attr("id") ); break; } }); } else { $.facebox('<p class=\'p_inner error bold\'>A selection needs to be made to work with this operation'); setTimeout(function(){ $.facebox.close(); }, 2000); } break; //REMOVE case "remove": if ( $.jstree._reference("#<?=$id_arr[$k]?>").get_selected(false, true).length ) { $.jstree._reference("#<?=$id_arr[$k]?>").get_selected(false, true).each(function(index,element){ //only execute if the current node is not the first one (drive) if( $(element).attr("id") != $("div.jstree > ul > li").first().attr("id") ) { $("#<?=$id_arr[$k]?>").jstree("remove",'#'+$(element).attr("id")); } else $("#<?=$id_arr[$k]?>").jstree("deselect_node", '#'+$(element).attr("id") ); }); } else { $.facebox('<p class=\'p_inner error bold\'>A selection needs to be made to work with this operation'); setTimeout(function(){ $.facebox.close(); }, 2000); } break; //RENAME NODE only one selection case "rename": if ( $.jstree._reference("#<?=$id_arr[$k]?>").get_selected(false, true).length ) { $.jstree._reference("#<?=$id_arr[$k]?>").get_selected(false, true).each(function(index,element){ if( $(element).attr("id") != $("div.jstree > ul > li").first().attr("id") ) { switch(index) { case 0: $("#<?=$id_arr[$k]?>").jstree("rename", '#'+$(element).attr("id") ); break; default: $("#<?=$id_arr[$k]?>").jstree("deselect_node", '#'+$(element).attr("id") ); break; } } else $("#<?=$id_arr[$k]?>").jstree("deselect_node", '#'+$(element).attr("id") ); }); } else { $.facebox('<p class=\'p_inner error bold\'>A selection needs to be made to work with this operation'); setTimeout(function(){ $.facebox.close(); }, 2000); } break; //Cut case "cut": if ( $.jstree._reference("#<?=$id_arr[$k]?>").get_selected(false, true).length ) { $.jstree._reference("#<?=$id_arr[$k]?>").get_selected(false, true).each(function(index,element){ switch(index) { case 0: $("#<?=$id_arr[$k]?>").jstree("cut", '#'+$(element).attr("id")); $.facebox('<p class=\'p_inner teal\'>Operation "Cut" successfully done.<p class=\'p_inner teal bold\'>Where to place it?'); setTimeout(function(){ $.facebox.close(); $("#<?=$id_arr[$k]?>").jstree("deselect_node", '#'+$(element).attr("id")); }, 2000); break; default: $("#<?=$id_arr[$k]?>").jstree("deselect_node", '#'+$(element).attr("id") ); break; } }); } else { $.facebox('<p class=\'p_inner error bold\'>A selection needs to be made to work with this operation'); setTimeout(function(){ $.facebox.close(); }, 2000); } break; //Copy case "copy": if ( $.jstree._reference("#<?=$id_arr[$k]?>").get_selected(false, true).length ) { $.jstree._reference("#<?=$id_arr[$k]?>").get_selected(false, true).each(function(index,element){ switch(index) { case 0: $("#<?=$id_arr[$k]?>").jstree("copy", '#'+$(element).attr("id")); $.facebox('<p 
class=\'p_inner teal\'>Operation "Copy": Successfully done.<p class=\'p_inner teal bold\'>Where to place it?'); setTimeout(function(){ $.facebox.close(); $("#<?=$id_arr[$k]?>").jstree("deselect_node", '#'+$(element).attr("id") ); }, 2000); break; default: $("#<?=$id_arr[$k]?>").jstree("deselect_node", '#'+$(element).attr("id") ); break; } }); } else { $.facebox('<p class=\'p_inner error bold\'>A selection needs to be made to work with this operation'); setTimeout(function(){ $.facebox.close(); }, 2000); } break; case "paste": if ( $.jstree._reference("#<?=$id_arr[$k]?>").get_selected(false, true).length ) { $.jstree._reference("#<?=$id_arr[$k]?>").get_selected(false, true).each(function(index,element){ switch(index) { case 0: $("#<?=$id_arr[$k]?>").jstree("paste", '#'+$(element).attr("id")); break; default: $("#<?=$id_arr[$k]?>").jstree("deselect_node", '#'+$(element).attr("id") ); break; } }); } else { $.facebox('<p class=\'p_inner error bold\'>A selection needs to be made to work with this operation'); setTimeout(function(){ $.facebox.close(); }, 2000); } break; } }); <? } ?> server.php $path='../../../..'; require_once "$path/phpfoo/dbif.class"; require_once "$path/global.inc"; // Database config & class $db_config = array( "servername"=> $dbHost, "username" => $dbUser, "password" => $dbPW, "database" => $dbName ); if(extension_loaded("mysqli")) require_once("_inc/class._database_i.php"); else require_once("_inc/class._database.php"); //Tree class require_once("_inc/class.ctree.php"); $dbLink = new dbif(); $dbErr = $dbLink->connect($dbName,$dbUser,$dbPW,$dbHost); $jstree = new json_tree(); if(isset($_GET["reconstruct"])) { $jstree->_reconstruct(); die(); } if(isset($_GET["analyze"])) { echo $jstree->_analyze(); die(); } $table = '`discussions_tree`'; if($_REQUEST["operation"] && strpos($_REQUEST["operation"], "_") !== 0 && method_exists($jstree, $_REQUEST["operation"])) { foreach($_REQUEST as $k => $v) { switch($k) { case 'user_id': //We are passing the user_id from the $_SESSION on each request and trying to pick up the min and max value from the table that matches the 'user_id' $sql = "SELECT max(`right`) , min(`left`) FROM $table WHERE `user_id`=$v"; //If the select does not return any value then just let it be :P if (!list($right, $left)=$dbLink->getRow($sql)) { $sql = $dbLink->dbSubmit("UPDATE $table SET `user_id`=$v WHERE `id` = 1 AND `parent_id` = 0"); $sql = $dbLink->dbSubmit("UPDATE $table SET `user_id`=$v WHERE `parent_id` = 1 AND `label`='How to Tag' "); } else { $sql = $dbLink->dbSubmit("UPDATE $table SET `user_id`=$v, `right`=$right+2 WHERE `id` = 1 AND `parent_id` = 0"); $sql = $dbLink->dbSubmit("UPDATE $table SET `user_id`=$v, `left`=$left+1, `right`=$right+1 WHERE `parent_id` = 1 AND `label`='How to Tag' "); } break; } } header("HTTP/1.0 200 OK"); header('Content-type: application/json; charset=utf-8'); header("Cache-Control: no-cache, must-revalidate"); header("Expires: Mon, 26 Jul 1997 05:00:00 GMT"); header("Pragma: no-cache"); echo $jstree->{$_REQUEST["operation"]}($_REQUEST); die(); } header("HTTP/1.0 404 Not Found"); ?> The problem: DND *(Drag and Drop) works, Delete works, Create works, Rename works, but Copy, Cut and Paste don't work

    Read the article

  • NSObject release destroys local copy of object's data

    - by Spider-Paddy
I know this is something stupid on my part, but I don't get what's happening. I create an object that fetches data and puts it into an array in a specific format. Since it fetches asynchronously (it has to download and parse the data), I gave the object a delegate method so that, when finished, the data-fetching object copies its formatted array into an array in the calling object. The problem is that when the data-fetching object is released, the copy it created in the caller is erased too. The code is:

In the .h file:

    @property (nonatomic, retain) NSArray *imagesDataSource;

In the .m file:

    // Fetch item details
    ImagesParser *imagesParserObject = [[ImagesParser alloc] init:self];
    [imagesParserObject getArticleImagesOfArticleId:(NSInteger)currentArticleId];
    [imagesParserObject release];  // <-- problematic release

    // Called by the parser when images parsing is finished
    -(void)imagesDataTransferComplete:(ImagesParser *)imagesParserObject
    {
        self.imagesDataSource = [imagesParserObject.returnedArray copy]; // copy the array into our own property

        // If there are more pics, they must be assembled in an array for possible UIImageView animation
        NSInteger picCount = [imagesDataSource count];
        if (picCount > 1) // 1 image is assumed to be the pic already displayed
        {
            // Build image array
            NSMutableArray *tempPicArray = [[NSMutableArray alloc] init]; // Temp space to hold images while building
            for (int i = 0; i < picCount; i++)
            {
                // Get Nr from the only article in detailDataSource & the pic name (Small) from each item in imagesDataSource
                NSString *picAddress = [NSString stringWithFormat:@"http://some.url.com/shopdata/image/article/%@/%@",
                    [[detailDataSource objectAtIndex:0] objectForKey:@"Nr"],
                    [[imagesDataSource objectAtIndex:i] objectForKey:@"Small"]];
                NSURL *picURL = [NSURL URLWithString:picAddress];
                NSData *picData = [NSData dataWithContentsOfURL:picURL];
                [tempPicArray addObject:[UIImage imageWithData:picData]];
            }
            imagesArray = [tempPicArray copy]; // copy makes an immutable copy of the array
            [tempPicArray release];
            currentPicIndex = 0; // Assume the first pic is the pic already being shown
        }
        else
            imagesArray = nil; // No need for a needless pic array

        // Remove the "please wait" message
        [pleaseWaitViewControllerObject.view removeFromSuperview];
    }

I put in tons of NSLog lines to keep track of what was going on, and self.imagesDataSource is populated with the returned array, but when the parser object is released, self.imagesDataSource becomes empty. I thought self.imagesDataSource = [imagesParserObject.returnedArray copy]; was supposed to make an independent object, as if it had been alloc/init'ed, so that self.imagesDataSource is not just a pointer to the parser's array but its own array. So why does releasing the parser object clear the copy of the array? (I checked and double-checked that nothing is overwriting self.imagesDataSource; commenting out [imagesParserObject release] consistently fixes the problem.) I also have exactly the same problem with self.detailDataSource, which is declared and populated in exactly the same way as self.imagesDataSource. I thought that once I call the parser I could release it, because the caller no longer needs to refer to it and all further activity is carried out by the parser object through its delegate method. What am I doing wrong?
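
As an aside, the copy semantics the question relies on can be verified in isolation. A minimal sketch under manual retain/release (the variable names are placeholders, not from the question) showing that a copied array survives both mutation and release of its source:

    #import <Foundation/Foundation.h>

    int main(void)
    {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

        NSMutableArray *source = [[NSMutableArray alloc] initWithObjects:@"a", @"b", nil];
        NSArray *independent = [source copy];   // new array with its own storage, retain count +1

        [source removeAllObjects];              // mutating the source does not touch 'independent'
        [source release];                       // nor does releasing it

        NSLog(@"%lu", (unsigned long)[independent count]);  // still prints 2

        [independent release];                  // balance the copy
        [pool drain];
        return 0;
    }

Since -copy really does produce an independent object, a vanishing array in code shaped like the question's usually points elsewhere: a direct ivar assignment somewhere that bypasses the retaining property setter, or the parser being deallocated by the early release before its asynchronous callback runs, so the delegate method executes against a half-torn-down object. Both are hedged guesses, not a diagnosis of this exact code.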

    Read the article

  • Non-linear regression models in PostgreSQL using R

    - by Dave Jarvis
Background

I have climate data (temperature, precipitation, snow depth) for all of Canada between 1900 and 2009. I have written a basic website; the simplest page lets users choose a category and a city, and they get back a very simple report (without the parameters and calculations section). The primary purpose of the web application is to provide a simple user interface so that the general public can explore the data in meaningful ways. (A list of numbers is not meaningful to the general public, nor is a website that offers too many inputs.) The secondary purpose is to give climatologists and other scientists deeper ways to view the data. (Using too many inputs, of course.)

Tool Set

The database is PostgreSQL with R (mostly) installed. The reports are written using iReport and generated using JasperReports.

Poor Model Choice

Currently, a linear regression model is applied against annual averages of daily data. The linear regression is calculated within a PostgreSQL function as follows:

    SELECT
        regr_slope( amount, year_taken ),
        regr_intercept( amount, year_taken ),
        corr( amount, year_taken )
    FROM temp_regression
    INTO STRICT slope, intercept, correlation;

The results are returned to JasperReports using:

    SELECT
        year_taken,
        amount,
        year_taken * slope + intercept,
        slope,
        intercept,
        correlation,
        total_measurements
    INTO result;

JasperReports calls into PostgreSQL using the following parameterized analysis function:

    SELECT
        year_taken, amount, measurements, regression_line, slope,
        intercept, correlation, total_measurements, execute_time
    FROM climate.analysis(
        $P{CityId}, $P{Elevation1}, $P{Elevation2}, $P{Radius},
        $P{CategoryId}, $P{Year1}, $P{Year2} )
    ORDER BY year_taken

This is not an optimal solution because it gives the false impression that the climate is changing at a slow but steady rate.

Questions

Using functions that take two parameters (e.g., year [X] and amount [Y]), such as PostgreSQL's regr_slope: What is a better regression model to apply? Which CRAN packages provide such models? (Installable, ideally, using apt-get.) How can the R functions be called from within a PostgreSQL function? If no such functions exist: What parameters should I try to obtain for functions that will produce the desired fit? How would you recommend showing the best-fit curve? Keep in mind that this is a web app for use by the general public. If the only way to analyse the data is from an R shell, then the purpose has been defeated. (I know this is not the case for most R functions I have looked at so far.) Thank you!
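
On the "how can R be called from within a PostgreSQL function" part, the usual route is the PL/R procedural language (the plr extension). A minimal, hedged sketch, assuming plr is installed and using illustrative names; it fits a loess curve (base-R local regression, so no extra CRAN package needed), which is one way to show a trend without committing to a functional form:

    -- Assumes the PL/R extension is installed for this database.
    -- Fits a loess curve to (year, amount) pairs and returns the smoothed
    -- values, which JasperReports can plot alongside the raw measurements.
    CREATE OR REPLACE FUNCTION climate.loess_fit(years float8[], amounts float8[])
    RETURNS float8[] AS $$
        fit <- loess(amounts ~ years, span = 0.75)   # span controls smoothness
        predict(fit, data.frame(years = years))      # one smoothed value per input year
    $$ LANGUAGE plr;

    -- Hypothetical usage against the temp_regression table from the question:
    -- SELECT unnest(climate.loess_fit(array_agg(year_taken::float8),
    --                                 array_agg(amount)))
    -- FROM temp_regression;

If a parametric fit is preferred instead of a smoother, R's nls() can be wrapped the same way; the sketch above is only one option, not a recommendation of loess as the "right" climate model.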

    Read the article

  • How do you handle EF Data Contexts combined with asp.net custom membership/role providers

    - by KallDrexx
I can't seem to get my head around how to implement a custom membership provider with Entity Framework data contexts in my asp.net MVC application. I understand how to create a custom membership/role provider by itself (using this as a reference). Here's my current setup: I have a repository factory interface that allows different repository factories to be created (right now I only have factories for EF repositories and in-memory repositories). The repository factory looks like this:

    public class EFRepositoryFactory : IRepositoryFactory
    {
        private EntitiesContainer _entitiesContext;

        /// <summary>
        /// Constructor that generates the necessary object contexts
        /// </summary>
        public EFRepositoryFactory()
        {
            _entitiesContext = new EntitiesContainer();
        }

        /// <summary>
        /// Generates a new entity framework repository for the specified entity type
        /// </summary>
        /// <typeparam name="T">Type of entity to generate a repository for</typeparam>
        /// <returns>Returns an EFRepository</returns>
        public IRepository<T> GenerateRepository<T>() where T : class
        {
            return new EFRepository<T>(_entitiesContext);
        }
    }

Controllers are passed an EF repository factory via Castle Windsor. Each controller then creates all the service/business layer objects it requires and passes the repository factory into them. This means all service objects use the same EF data context, and I do not have to worry about an object being used in more than one data context (which, of course, is not allowed and causes an exception).

Right now I am trying to design my user and authorization service layers, and I have run into a design roadblock. The user/authorization service will be a central class that handles the logic for logging in, changing user details, managing roles, and determining which users have access to what. The problem is that, with the current approach, the asp.net MVC controllers initialize their own EF repository factory via Windsor, while the asp.net membership/role provider has to initialize its own. Each part of the site then ends up with its own data context. So if asp.net authenticates a user, that user's object lives in the membership provider's data context, and if I then try to retrieve the same user object in the service layer (say, to change the user's name), I will get a duplication exception.

I thought of making the repository factory class a singleton, but I don't see a way for that to work with Castle Windsor. How do other people handle asp.net custom providers in an MVC (or any n-tier) architecture without running into object-duplication issues?
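
One common arrangement, sketched below as an assumption rather than a prescription, is to register the factory with Windsor's per-web-request lifestyle and have the provider resolve it from the same container the controllers use, so both sides share one EntitiesContainer per HTTP request. The ContainerBootstrap class name and the User entity are hypothetical; IRepositoryFactory and EFRepositoryFactory come from the question:

    using Castle.MicroKernel.Registration;
    using Castle.Windsor;

    public static class ContainerBootstrap   // hypothetical bootstrap, called from Application_Start
    {
        public static IWindsorContainer Container { get; private set; }

        public static void Initialize()
        {
            Container = new WindsorContainer();
            Container.Register(
                Component.For<IRepositoryFactory>()
                         .ImplementedBy<EFRepositoryFactory>()
                         .LifeStyle.PerWebRequest);  // one factory (and one data context) per request
            // Windsor 2.x syntax; newer versions spell it .LifestylePerWebRequest()
        }
    }

    // Providers are instantiated by asp.net itself, so they cannot take
    // constructor dependencies; inside a provider method, resolve instead:
    //     var factory = ContainerBootstrap.Container.Resolve<IRepositoryFactory>();
    //     var users   = factory.GenerateRepository<User>();

Note that the PerWebRequest lifestyle also requires Windsor's PerWebRequestLifestyleModule to be registered as an HTTP module in web.config. The effect is singleton-like sharing within a request without a process-wide singleton context, which is what trips up EF.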

    Read the article

< Previous Page | 344 345 346 347 348 349 350 351 352 353 354 355  | Next Page >