Search Results


  • Xorg does not see my monitor EDID

    - by sean farley
    Below is the output from my Xorg.0.log:

        X.Org X Server 1.11.3 Release Date: 2011-12-16
        [ 22.311] X Protocol Version 11, Revision 0
        [ 22.311] Build Operating System: Linux 2.6.42-23-generic x86_64 Ubuntu
        [ 22.311] Current Operating System: Linux sean-P55-USB3 3.2.0-34-generic #53-Ubuntu SMP Thu Nov 15 10:48:16 UTC 2012 x86_64
        [ 22.311] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.2.0-34-generic root=UUID=0a34603e-aee9-44d1-8982-a5a5a38c3e4d ro quiet splash
        [ 22.311] Build Date: 29 August 2012 12:12:33AM
        [ 22.311] xorg-server 2:1.11.4-0ubuntu10.8 (For technical support please see http://www.ubuntu.com/support)
        [ 22.311] Current version of pixman: 0.24.4
        [ 22.311] Before reporting problems, check http://wiki.x.org to make sure that you have the latest version.
        [ 22.311] Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
        [ 22.311] (==) Log file: "/var/log/Xorg.0.log", Time: Sat Nov 17 13:20:45 2012
        [ 22.311] (==) Using config file: "/etc/X11/xorg.conf"
        [ 22.311] (==) Using system config directory "/usr/share/X11/xorg.conf.d"
        [ 22.311] (==) No Layout section. Using the first Screen section.
        [ 22.311] (==) No screen section available. Using defaults.
        [ 22.311] (**) |-->Screen "Default Screen Section" (0)
        [ 22.311] (**) | |-->Monitor "<default monitor>"
        [ 22.311] (==) No device specified for screen "Default Screen Section". Using the first device section listed.
        [ 22.311] (**) | |-->Device "Default Device"
        [ 22.311] (==) No monitor specified for screen "Default Screen Section". Using a default monitor configuration.
        [ 22.311] (==) Automatically adding devices

    I have searched all over, and followed lots of dead ends and unanswered questions on this issue. I need to get this monitor recognised so I can use its native resolution of 1600x1200. The Nvidia driver in Windows has no problem with this. The monitor is an old Iiyama HM204DT A. Is there a way of configuring Xorg manually to get this working? I have tried xrandr, but it will not work. Output:

        sean@sean-P55-USB3:~$ xrandr
        Screen 0: minimum 8 x 8, current 1152 x 864, maximum 16384 x 16384
        DVI-I-0 connected 1152x864+0+0 (normal left inverted right x axis y axis) 0mm x 0mm
           1024x768 60.0 +
           1360x768 60.0 59.8
           1152x864 60.0*
           800x600 72.2 60.3 56.2
           680x384 119.9 119.6
           640x480 59.9
           512x384 120.0
           400x300 144.4
           320x240 120.1
        DVI-I-1 disconnected (normal left inverted right x axis y axis)
        DVI-I-2 disconnected (normal left inverted right x axis y axis)
        HDMI-0 disconnected (normal left inverted right x axis y axis)
        DVI-I-3 disconnected (normal left inverted right x axis y axis)

    Tried the Nvidia xorg.conf:

        sean@sean-P55-USB3:~$ sudo nvidia-xconfig
        [sudo] password for sean:
        Using X configuration file: "/etc/X11/xorg.conf".
        VALIDATION ERROR: Data incomplete in file /etc/X11/xorg.conf.
        Device section "Default Device" must have a Driver line.
        Backed up file '/etc/X11/xorg.conf' as '/etc/X11/xorg.conf.nvidia-xconfig-original'
        Backed up file '/etc/X11/xorg.conf' as '/etc/X11/xorg.conf.backup'
        New X configuration file written to '/etc/X11/xorg.conf'

    How do I insert a driver line? This is a bit of a pain, as I want to use my Vectorworks CAD program in a WinXP VBox at 1600x1200, but all virtual drives are restricted to the host screen resolution. Do I need to manually create EDID info in Xorg? I am slightly confused about how Xorg and Nvidia relate. Please help.
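
    Since nvidia-xconfig complains that Device section "Default Device" must have a Driver line, one option is to hand-write a minimal /etc/X11/xorg.conf that supplies both the driver and a fixed 1600x1200 mode. The sketch below is an untested illustration: the Modeline is standard `cvt 1600 1200` output, the sync ranges are placeholders that should be taken from the HM204DT's manual, and the proprietary nvidia driver may additionally need ModeValidation options when no EDID is available.

        Section "Monitor"
            Identifier  "Iiyama"
            # Placeholder ranges - replace with the values from the monitor's manual
            HorizSync   30.0 - 121.0
            VertRefresh 50.0 - 160.0
            # Output of `cvt 1600 1200` (59.87 Hz)
            Modeline "1600x1200_60.00"  161.00  1600 1712 1880 2160  1200 1203 1207 1245 -hsync +vsync
        EndSection

        Section "Device"
            Identifier "Default Device"
            Driver     "nvidia"    # the Driver line nvidia-xconfig asked for
        EndSection

        Section "Screen"
            Identifier "Default Screen Section"
            Device     "Default Device"
            Monitor    "Iiyama"
            SubSection "Display"
                Modes "1600x1200_60.00"
            EndSubSection
        EndSection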


  • No database selected error in CodeIgniter running on MAMP stack

    - by Apophenia Overload
    First off, does anyone know of a good place to get help with CodeIgniter? The official community forums are somewhat disappointing in terms of getting many responses. I have CodeIgniter installed on a regular MAMP stack, and I'm working through a tutorial. However, I have only gone through the "Create" section, and currently I am getting a "No database selected" error.

    Model:

        <?php
        class submit_model extends Model {
            function submitForm($school, $district) {
                $data = array(
                    'school' => $school,
                    'district' => $district
                );
                $this->db->insert('your_stats', $data);
            }
        }

    View:

        <?php $this->load->helper('form'); ?>
        <?php echo form_open('main'); ?>
        <p><?php echo form_input('school'); ?></p>
        <p><?php echo form_input('district'); ?></p>
        <p><?php echo form_submit('submit', 'Submit'); ?></p>
        <?php echo form_close(); ?>

    Controller:

        <?php
        class Main extends Controller {
            function index() {
                // Check if form is submitted
                if ($this->input->post('submit')) {
                    $school = $this->input->xss_clean($this->input->post('school'));
                    $district = $this->input->xss_clean($this->input->post('district'));
                    $this->load->model('submit_model');
                    // Add the post
                    $this->submit_model->submitForm($school, $district);
                }
                $this->load->view('main_view');
            }
        }

    database.php:

        $db['default']['hostname'] = "localhost:8889";
        $db['default']['username'] = "root";
        $db['default']['password'] = "root";
        $db['default']['database'] = "stats_test";
        $db['default']['dbdriver'] = "mysql";
        $db['default']['dbprefix'] = "";
        $db['default']['pconnect'] = TRUE;
        $db['default']['db_debug'] = TRUE;
        $db['default']['cache_on'] = FALSE;
        $db['default']['cachedir'] = "";
        $db['default']['char_set'] = "utf8";
        $db['default']['dbcollat'] = "utf8_general_ci";

    config.php:

        $config['base_url'] = "http://localhost:8888/ci/";
        ...
        $config['index_page'] = "index.php";
        ...
        $config['uri_protocol'] = "AUTO";

    So how come it's giving me this error message?

        A Database Error Occurred
        Error Number: 1046
        No database selected
        INSERT INTO `your_stats` (`school`, `district`) VALUES ('TJHSST', 'FairFax')

    Is there any way for me to test whether CodeIgniter can actually detect the MySQL databases I've created with phpMyAdmin in my MAMP stack?
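
    One quick way to test the connection is a sketch like the following, assuming CodeIgniter 1.x as above, with the database library loaded explicitly (note that none of the posted code loads it):

        <?php
        // e.g. in a throwaway controller action
        $this->load->database();            // connect using config/database.php
        if ($this->db->conn_id === FALSE) {
            echo 'Could not connect to MySQL at all';
        } else {
            // runs SHOW TABLES against the configured database (stats_test)
            print_r($this->db->list_tables());
        }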


  • Why is Apache ignoring VirtualHost directive for first name in hosts file?

    - by Peter Taylor
    Standard pre-emptive disclaimer: host names, IP addresses, and directories are anonymised.

    Problem

    We have a server with Apache 2.2 (WAMP) listening on one IP and IIS listening on another. An ASP.NET application running under IIS needs to do some simple GETs from the PHP applications running under Apache to build a unified search results page. This is a virtual server, so the internal IPs are mapped somehow to external ones. The internal DNS system doesn't resolve the publicly published names under which the applications are accessed externally, so the obvious solution was to add them to etc/hosts with the internal IP address:

        127.0.0.1 localhost
        # 10.0.1.17 is the IP address Apache listens on
        10.0.1.17 phpappone.example.com
        10.0.1.17 phpapptwo.example.com

    After restarting Apache, phpappone.example.com stopped working. Instead of returning pages from that app, Apache was returning pages from the default site. The other PHP apps worked fine.

    Relevant configuration

    httpd.conf, summarised, says:

        ServerAdmin [email protected]
        ServerRoot "c:/server/Apache2"
        ServerName www.example.com
        Listen 10.0.1.17:80
        Listen 10.0.1.17:443

        # Not obviously related config options elided
        # Nothing obviously non-standard
        # If you want more details, post a comment

        DocumentRoot "c:/server/Apache2/htdocs"
        <Directory />
            Options FollowSymLinks
            AllowOverride None
            Order deny,allow
            Deny from all
        </Directory>

        # Fallback for unknown host names
        <Directory "c:/server/Apache2/htdocs">
            Options Indexes FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

        # PHP apps common config
        <Directory "C:/Inetpub/wwwroot/phpapps">
            Options FollowSymLinks -Indexes +ExecCGI
            AllowOverride All
            Order Allow,Deny
            Allow from All
        </Directory>

        # Virtual hosts
        NameVirtualHost 10.0.1.17:80
        NameVirtualHost 10.0.1.17:443
        <VirtualHost _default_:80>
        </VirtualHost>
        <VirtualHost _default_:443>
            SSLEngine On
            SSLCertificateFile "certs/example.crt"
            SSLCertificateKeyFile "certs/example.key"
        </VirtualHost>
        Include conf/vhosts/*.conf

    and the vhosts files are e.g.

        <VirtualHost 10.0.1.17:80>
            ServerName phpappone.example.com
            DocumentRoot "c:/Inetpub/wwwroot/phpapps/phpappone"
        </VirtualHost>
        <VirtualHost 10.0.1.17:443>
            ServerName phpappone.example.com
            DocumentRoot "c:/Inetpub/wwwroot/phpapps/phpappone"
            SSLEngine On
            SSLCertificateFile "certs/example.crt"
            SSLCertificateKeyFile "certs/example.key"
        </VirtualHost>

    Buggy behaviour or our misunderstanding?

    The documentation for name-based virtual hosts says:

        Now when a request arrives, the server will first check if it is using an IP address that
        matches the NameVirtualHost. If it is, then it will look at each <VirtualHost> section with
        a matching IP address and try to find one where the ServerName or ServerAlias matches the
        requested hostname. If it finds one, then it uses the configuration for that server. If no
        matching virtual host is found, then the first listed virtual host that matches the IP
        address will be used.

    Yet that isn't what we observe. It seems that if the hostname is the first hostname listed against the IP address in etc/hosts, then Apache uses the configuration from the main server and skips the virtual host lookup.

    Workarounds

    The workaround we've put in place for the time being is to add a fake line to the hosts file:

        127.0.0.1 localhost
        # 10.0.1.17 is the IP address Apache listens on
        10.0.1.17 fakename.example.com
        10.0.1.17 phpappone.example.com
        10.0.1.17 phpapptwo.example.com

    This fixes the problem, but it's not very elegant. In addition, it seems a bit brittle: reordering lines in the hosts file (or deleting the nonsense value) can break it. The other obvious workaround is to make the main server configuration match that of the troublesome virtual host, but that is equally brittle. A third option, which is just ugly, would be to change the ASP.NET code to take separate config items for the IP address and the hostname and to implement HTTP manually. Ugh.

    The question

    Is there a good solution to this problem which localises any "Do not touch this!" explanations to the Apache config files?
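
    One candidate answer that keeps the explanation inside the Apache config is an explicit catch-all vhost. This is a sketch only, untested against this setup: the "00-" prefix assumes conf/vhosts/*.conf is included in name order, and default.invalid is a placeholder name.

        # conf/vhosts/00-default.conf
        # Loaded first, so it becomes the fallback vhost for any hostname
        # that no other <VirtualHost> claims. Do not reorder or delete.
        <VirtualHost 10.0.1.17:80>
            ServerName default.invalid
            DocumentRoot "c:/server/Apache2/htdocs"
        </VirtualHost>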


  • Asp.net Session State Revisited

    - by karan@dotnet
    Every now and then I see doubts and queries on what I believe is the most discussed topic in the .NET environment - ASP.NET sessions. So what really are they, why are they needed, and what do the browser and .NET do with them? These and some other questions I hope to answer with this post.

    Because of the stateless nature of the HTTP protocol, there is always a need for state management in a web application. There are many other ways to store data, but I feel session state is amongst the most powerful. ASP.NET session state is a technology that lets you store server-side, user-specific data. Our web applications can then use that data to process requests from the user for which the session state was instantiated.

    So when is a session first created? When we start an ASP.NET application, a non-expiring cookie is created, called ASP.NET_SessionId. Basically there are two methods for this, depending on how you configure the setting in your config file. The session ID can be part of a cookie, as discussed above (called ASP.NET_SessionId), or it can be embedded in the browser's URL. For the latter we have to set cookieless sessions in our web.config file. These session IDs are 120-bit random numbers, each represented by a 20-character string. The cookie will be alive until you close your browser. If you browse from one app to another within the same domain, then both apps will use the same session ID to track the session state. Why reuse? So that you don't have to create a new session ID for each request.

    One can abandon a particular session by calling Session.Abandon(), which will stop the page processing and clear out the session data. A subsequent page request causes a brand new session object to be instantiated. So what happened to my cookie? Well, the session cookie is still there even when Session.Abandon() is called and another session object is created. Session.Abandon() lets you clear out your session state without waiting for the session timeout. By default, this timeout is a 20-minute sliding expiration, refreshed every time the user makes a request to the web site and presents the session ID cookie. The Abandon method sets a flag in the session state object that indicates that the session state should be abandoned.

    If your app does not have a global.asax, then your session cookie will be killed at the end of each page request. So you need to have a global.asax file and a Session_Start() handler to make sure that the session cookie remains intact once it is issued after the first page hit. The runtime invokes global.asax's Session_OnEnd() when you call Session.Abandon() or the session times out.

    The session manager stores session data in HttpCache with sliding expiration, where this timeout can be configured in the <sessionState> section of the web.config file. When the timeout is up, the HttpCache will remove the session state object.

    Sometimes we want particular pages not to time out, as compared to other pages in our application. We can handle this in two ways. First, we can set a timer, or maybe a JavaScript function, that refreshes the page after a fixed interval of time. The catch is that the page may be cached locally, in which case the request is never made to the server; to prevent that, you can add this to your page:

        <%@ OutputCache Location="None" VaryByParam="None" %>

    The second approach is to move your page into its own folder and then add a web.config to that folder to control the timeout.
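
    For reference, both the cookieless option and the sliding timeout mentioned above live in the <sessionState> element of web.config; a minimal illustration (the values are just examples):

        <configuration>
          <system.web>
            <!-- cookieless="true" embeds the session ID in the URL;
                 timeout is the sliding expiration in minutes -->
            <sessionState cookieless="true" timeout="20" />
          </system.web>
        </configuration>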
    Also, not all pages in your application will need access to session state. For those pages that do not, you can indicate that session state is not needed and prevent session data from being fetched from the store in requests to those pages. You can disable session state at page level like this:

        <%@ Page EnableSessionState="False" %>

    tbc…


  • 30 Steps to Master ASP.NET MVC Application development

    - by Rajesh Pillai
    Welcome Readers! I am starting a new series on ASP.NET MVC skill building, which will be posted over the next couple of weeks. Let me know your thoughts on the content I have planned; a couple of the headings have been taken from the ASP.NET MVC 2 Cookbook. (NOTE: only the headings have been taken, not the content :)). Do let me know what you would like to see, or any additional inputs or ideas to cover in these topics. The '30' steps to master ASP.NET MVC are outlined below for quick reference. Will start filling this out quickly.

    1. A Peek into Model: What is a model? Different types of model; Presentation/ViewModel; Model Mapping (AutoMapper)
    2. A Peek into View: How a view works in ASP.NET MVC; View Engine Design; Custom View Engine; View Best Practices; Templated Helpers; Partial Views
    3. A Peek into Controller: Introduction; Controller Design; Controller Best Practices; Asynchronous Controller; Custom Action Result; Action Filters; Controller Factory to use with IoC
    4. Routes: Explanation; Routes from the database; Routes from XML; More complex routing
    5. Master Pages: Basics; Setting Master Page Dynamically
    6. Working with data in the view: Repeating Views; Array of check boxes; Array of radio buttons; Paged data; CRUD; Client-side action; Confirmation Dialog (modal window); jqGrid
    7. Working with Forms
    8. Validation: Model Validation with DataAnnotations; Using the xVal validation framework; Client-side validation with jQuery Validation; Fluent Validation; Model Binders
    9. Templating: Create strongly typed helpers using T4; Custom View Templates with T4; Create a custom MVC project template using T4
    10. IoC: AutoFac; Ninject; Unity Application
    11. Areas
    12. jQuery, Ajax and jQuery Plugins
    13. State Maintenance: Application state; User state; Cookies; Web farm
    14. Error Handling: View error handling; Controller error handling; ELMAH (Error Logging Modules and Handlers)
    15. Authentication and Authorization: User Registration form; SignOn process; Password reminder; Membership and Roles; Windows authentication; Restricting access to all pages, selected pages, pages by role, a controller, or a selected area
    16. Profiles and Themes: Using Profiles; Inheriting a Profile; Migrating an anonymous profile; Creating custom themes; Using themes; User-personalized themes
    17. Configuration: Adding custom application settings in web.config; Displaying custom error messages; Accessing other web.config configuration elements; Adding custom configuration elements to web.config; Encrypting web.config sections
    18. Tracing, Debugging and Logging
    19. Caching: Caching a whole page; Caching pages based on route details; Caching pages based on browser type and version; Caching pages based on custom strings; Caching partial pages; Caching application data; Object Caching; Using Microsoft Velocity; Using MemCache; Using AppFabric cache
    20. Localization
    21. HTTP Handlers and Modules
    22. Security: XSS/CSRF; AntiForgery; Encoding
    23. HtmlHelpers: Strongly typed helpers; Writing custom helpers
    24. Repository Pattern (Data access)
    25. WF/WCF
    26. Unit Testing
    27. Mocking Framework
    28. Integration Testing
    29. Load / Performance Testing
    30. Deployment

    Once again, let me know your thoughts on this. Till then, enjoy MVC'ing!!!


  • Extending Oracle CEP with Predictive Analytics

    - by vikram.shukla(at)oracle.com
    Introduction: OCEP is often used as a business rules engine to execute a set of business logic rules via CQL statements, and to take decisions based on the outcome of those rules. There are times when configuring rules manually is sufficient, because an application needs to deal with only a small and well-defined set of static rules. However, in many situations customers don't want to pre-define such rules, for two reasons. First, they are dealing with events with lots of columns, and manually crafting such rules for each column, or for sets of columns and combinations thereof, is almost impossible. Second, they are content with probabilistic outcomes and do not care about 100% precision. The former is the case when a user is dealing with data of high dimensionality, the latter when an application can live with "false" positives, as they can be discarded after further inspection, say by a Human Task component in Business Process Management software.

    The primary goal of this blog post is to show how this can be achieved by combining OCEP with Oracle Data Mining® and leveraging the latter's rich set of algorithms and functionality to do predictive analytics in real time on streaming events. The secondary goal of this post is to show how OCEP can be extended to invoke any arbitrary external computation in an RDBMS from within CEP. The extensibility facility is known as the JDBC cartridge. The rest of the post describes the steps required to achieve this.

    We use the dataset available at http://blogs.oracle.com/datamining/2010/01/fraud_and_anomaly_detection_made_simple.html to showcase the capabilities. We use it to show how transaction anomalies or fraud can be detected.

    Building the model: Follow the self-explanatory steps described at the above URL to build the model. It is very simple - it uses built-in Oracle Data Mining PL/SQL packages to cleanse, normalize and build the model out of the dataset. You can also use the graphical Oracle Data Miner® to build the models. To summarize, it involves:

    - Specifying which algorithms to use. In this case we use Support Vector Machines, as we're trying to find anomalies in a highly dimensional dataset.
    - Building the model on the data in the table for the algorithms specified. For this example, the table was populated in the scott/tiger schema with appropriate privileges.

    Configuring the Data Source: This is the first step in building a CEP application using such an integration. Our datasource looks as follows in the server config file. It is advisable that you use the Visualizer to add it to the running server dynamically, rather than manually edit the file.
        <data-source>
            <name>DataMining</name>
            <data-source-params>
                <jndi-names>
                    <element>DataMining</element>
                </jndi-names>
                <global-transactions-protocol>OnePhaseCommit</global-transactions-protocol>
            </data-source-params>
            <connection-pool-params>
                <credential-mapping-enabled></credential-mapping-enabled>
                <test-table-name>SQL SELECT 1 from DUAL</test-table-name>
                <initial-capacity>1</initial-capacity>
                <max-capacity>15</max-capacity>
                <capacity-increment>1</capacity-increment>
            </connection-pool-params>
            <driver-params>
                <use-xa-data-source-interface>true</use-xa-data-source-interface>
                <driver-name>oracle.jdbc.OracleDriver</driver-name>
                <url>jdbc:oracle:thin:@localhost:1522:orcl</url>
                <properties>
                    <element>
                        <value>scott</value>
                        <name>user</name>
                    </element>
                    <element>
                        <value>{Salted-3DES}AzFE5dDbO2g=</value>
                        <name>password</name>
                    </element>
                    <element>
                        <name>com.bea.core.datasource.serviceName</name>
                        <value>oracle11.2g</value>
                    </element>
                    <element>
                        <name>com.bea.core.datasource.serviceVersion</name>
                        <value>11.2.0</value>
                    </element>
                    <element>
                        <name>com.bea.core.datasource.serviceObjectClass</name>
                        <value>java.sql.Driver</value>
                    </element>
                </properties>
            </driver-params>
        </data-source>

    Designing the EPN: The EPN is very simple in this example. We briefly describe each of the components. The adapter ("DataMiningAdapter") reads data from a .csv file and sends it to the CQL processor downstream. The event payload here is the same as that of the table in the database (refer to the attached project, or do a "desc table-name" from a SQL*Plus prompt). While this is for convenience in this example, it need not be the case. One can still omit fields in the streaming events, and need not match all columns in the table on which the model was built. Better yet, the events do not even need to have the same names as the columns in the table, as long as you alias them in the USING clause of the mining function. (Caveat: they still need to draw values from a similar universe or domain, otherwise it constitutes incorrect usage of the model.) There are two things in the CQL processor ("DataMiningProc") that make scoring possible on streaming events.

    1. User-defined cartridge function. Please refer to the OCEP CQL reference manual to find more details about how to define such functions. We include the function below in its entirety for illustration.
        <?xml version="1.0" encoding="UTF-8"?>
        <jdbcctxconfig:config
            xmlns:jdbcctxconfig="http://www.bea.com/ns/wlevs/config/application"
            xmlns:jc="http://www.oracle.com/ns/ocep/config/jdbc">
            <jc:jdbc-ctx>
                <name>Oracle11gR2</name>
                <data-source>DataMining</data-source>
                <function name="prediction2">
                    <param name="CQLMONTH" type="char"/>
                    <param name="WEEKOFMONTH" type="int"/>
                    <param name="DAYOFWEEK" type="char"/>
                    <param name="MAKE" type="char"/>
                    <param name="ACCIDENTAREA" type="char"/>
                    <param name="DAYOFWEEKCLAIMED" type="char"/>
                    <param name="MONTHCLAIMED" type="char"/>
                    <param name="WEEKOFMONTHCLAIMED" type="int"/>
                    <param name="SEX" type="char"/>
                    <param name="MARITALSTATUS" type="char"/>
                    <param name="AGE" type="int"/>
                    <param name="FAULT" type="char"/>
                    <param name="POLICYTYPE" type="char"/>
                    <param name="VEHICLECATEGORY" type="char"/>
                    <param name="VEHICLEPRICE" type="char"/>
                    <param name="FRAUDFOUND" type="int"/>
                    <param name="POLICYNUMBER" type="int"/>
                    <param name="REPNUMBER" type="int"/>
                    <param name="DEDUCTIBLE" type="int"/>
                    <param name="DRIVERRATING" type="int"/>
                    <param name="DAYSPOLICYACCIDENT" type="char"/>
                    <param name="DAYSPOLICYCLAIM" type="char"/>
                    <param name="PASTNUMOFCLAIMS" type="char"/>
                    <param name="AGEOFVEHICLES" type="char"/>
                    <param name="AGEOFPOLICYHOLDER" type="char"/>
                    <param name="POLICEREPORTFILED" type="char"/>
                    <param name="WITNESSPRESNT" type="char"/>
                    <param name="AGENTTYPE" type="char"/>
                    <param name="NUMOFSUPP" type="char"/>
                    <param name="ADDRCHGCLAIM" type="char"/>
                    <param name="NUMOFCARS" type="char"/>
                    <param name="CQLYEAR" type="int"/>
                    <param name="BASEPOLICY" type="char"/>
                    <return-component-type>char</return-component-type>
                    <sql><![CDATA[
                        SELECT to_char(PREDICTION_PROBABILITY(CLAIMSMODEL, '0' USING *))
                               AS probability
                        FROM (SELECT :CQLMONTH AS MONTH,
                                     :WEEKOFMONTH AS WEEKOFMONTH,
                                     :DAYOFWEEK AS DAYOFWEEK,
                                     :MAKE AS MAKE,
                                     :ACCIDENTAREA AS ACCIDENTAREA,
                                     :DAYOFWEEKCLAIMED AS DAYOFWEEKCLAIMED,
                                     :MONTHCLAIMED AS MONTHCLAIMED,
                                     :WEEKOFMONTHCLAIMED,
                                     :SEX AS SEX,
                                     :MARITALSTATUS AS MARITALSTATUS,
                                     :AGE AS AGE,
                                     :FAULT AS FAULT,
                                     :POLICYTYPE AS POLICYTYPE,
                                     :VEHICLECATEGORY AS VEHICLECATEGORY,
                                     :VEHICLEPRICE AS VEHICLEPRICE,
                                     :FRAUDFOUND AS FRAUDFOUND,
                                     :POLICYNUMBER AS
                                     POLICYNUMBER,
                                     :REPNUMBER AS REPNUMBER,
                                     :DEDUCTIBLE AS DEDUCTIBLE,
                                     :DRIVERRATING AS DRIVERRATING,
                                     :DAYSPOLICYACCIDENT AS DAYSPOLICYACCIDENT,
                                     :DAYSPOLICYCLAIM AS DAYSPOLICYCLAIM,
                                     :PASTNUMOFCLAIMS AS PASTNUMOFCLAIMS,
                                     :AGEOFVEHICLES AS AGEOFVEHICLES,
                                     :AGEOFPOLICYHOLDER AS AGEOFPOLICYHOLDER,
                                     :POLICEREPORTFILED AS POLICEREPORTFILED,
                                     :WITNESSPRESNT AS WITNESSPRESENT,
                                     :AGENTTYPE AS AGENTTYPE,
                                     :NUMOFSUPP AS NUMOFSUPP,
                                     :ADDRCHGCLAIM AS ADDRCHGCLAIM,
                                     :NUMOFCARS AS NUMOFCARS,
                                     :CQLYEAR AS YEAR,
                                     :BASEPOLICY AS BASEPOLICY
                              FROM dual)
                    ]]></sql>
                </function>
            </jc:jdbc-ctx>
        </jdbcctxconfig:config>

    2. Invoking the function for each event. Once this function is defined, you can invoke it from CQL as follows:

        <?xml version="1.0" encoding="UTF-8"?>
        <wlevs:config xmlns:wlevs="http://www.bea.com/ns/wlevs/config/application">
            <processor>
                <name>DataMiningProc</name>
                <rules>
                    <query id="q1"><![CDATA[
                        ISTREAM(SELECT S.CQLMONTH,
                                       S.WEEKOFMONTH,
                                       S.DAYOFWEEK, S.MAKE,
                                       :
                                       S.BASEPOLICY,
                                       C.F AS probability
                                FROM StreamDataChannel [NOW] AS S,
                                     TABLE(prediction2@Oracle11gR2(S.CQLMONTH,
                                                                   S.WEEKOFMONTH,
                                                                   S.DAYOFWEEK,
                                                                   S.MAKE, ...,
                                                                   S.BASEPOLICY) AS F of char) AS C)
                    ]]></query>
                </rules>
            </processor>
        </wlevs:config>

    Finally, the last stage in the EPN prints out the probability of the event being an anomaly. One can also define a threshold in CQL to filter out events that are normal, i.e., below a certain mark as defined by the analyst or designer (a sketch of such a query appears at the end of this post).

    Sample Runs: Now let's see how this behaves when events are streamed through CEP. We use only two events for brevity, one normal and the other not. This is one of the "normal" looking events; the probability of it being anomalous is less than 60%.

        Event is: eventType=DataMiningOutEvent object=q1 time=2904821976256 S.CQLMONTH=Dec, S.WEEKOFMONTH=5, S.DAYOFWEEK=Wednesday, S.MAKE=Honda, S.ACCIDENTAREA=Urban, S.DAYOFWEEKCLAIMED=Tuesday, S.MONTHCLAIMED=Jan, S.WEEKOFMONTHCLAIMED=1, S.SEX=Female, S.MARITALSTATUS=Single, S.AGE=21, S.FAULT=Policy Holder, S.POLICYTYPE=Sport - Liability, S.VEHICLECATEGORY=Sport, S.VEHICLEPRICE=more than 69000, S.FRAUDFOUND=0, S.POLICYNUMBER=1, S.REPNUMBER=12, S.DEDUCTIBLE=300, S.DRIVERRATING=1, S.DAYSPOLICYACCIDENT=more than 30, S.DAYSPOLICYCLAIM=more than 30, S.PASTNUMOFCLAIMS=none, S.AGEOFVEHICLES=3 years, S.AGEOFPOLICYHOLDER=26 to 30, S.POLICEREPORTFILED=No, S.WITNESSPRESENT=No, S.AGENTTYPE=External, S.NUMOFSUPP=none, S.ADDRCHGCLAIM=1 year, S.NUMOFCARS=3 to 4, S.CQLYEAR=1994, S.BASEPOLICY=Liability, probability=.58931702982118561 isTotalOrderGuarantee=true
        Anamoly probability: .58931702982118561

    However, the following event is scored as an anomaly with a very high probability of 89%, so there is likely to be something wrong with it. A close look reveals that the value of the "deductible" field (10000) is not "normal". What exactly constitutes normal here? If you run a query on the database to find ALL distinct values of the "deductible" field, it returns the following set: {300, 400, 500, 700}.

        Event is: eventType=DataMiningOutEvent object=q1 time=2598483773496 S.CQLMONTH=Dec, S.WEEKOFMONTH=5, S.DAYOFWEEK=Wednesday, S.MAKE=Honda, S.ACCIDENTAREA=Urban, S.DAYOFWEEKCLAIMED=Tuesday, S.MONTHCLAIMED=Jan, S.WEEKOFMONTHCLAIMED=1, S.SEX=Female, S.MARITALSTATUS=Single, S.AGE=21, S.FAULT=Policy Holder, S.POLICYTYPE=Sport - Liability, S.VEHICLECATEGORY=Sport, S.VEHICLEPRICE=more than 69000, S.FRAUDFOUND=0, S.POLICYNUMBER=1, S.REPNUMBER=12, S.DEDUCTIBLE=10000, S.DRIVERRATING=1, S.DAYSPOLICYACCIDENT=more than 30, S.DAYSPOLICYCLAIM=more than 30, S.PASTNUMOFCLAIMS=none, S.AGEOFVEHICLES=3 years, S.AGEOFPOLICYHOLDER=26 to 30, S.POLICEREPORTFILED=No, S.WITNESSPRESENT=No, S.AGENTTYPE=External, S.NUMOFSUPP=none, S.ADDRCHGCLAIM=1 year, S.NUMOFCARS=3 to 4, S.CQLYEAR=1994, S.BASEPOLICY=Liability, probability=.89171554529576691 isTotalOrderGuarantee=true
        Anamoly probability: .89171554529576691

    Conclusion: By way of this example, we show: real-time scoring of events as they flow through CEP, leveraging Oracle Data Mining; and how CEP applications can invoke complex, arbitrary external computations (function shipping) in an RDBMS.
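
    To make the threshold idea above concrete, here is a minimal sketch of what a final filtering query could look like. This is an illustration only: it assumes the cartridge function is changed to return a numeric probability rather than char, the channel and column names are guesses based on the event dumps above, and the 0.7 cut-off is arbitrary.

        <query id="filterAnomalies"><![CDATA[
            ISTREAM(SELECT * FROM DataMiningOutChannel [NOW] AS E
                    WHERE E.probability > 0.7)
        ]]></query>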


  • SSAS: Utility to export SQL code from your cube's Data Source View (DSV)

    - by DrJohn
    When you are working on a cube, particularly in a multi-person team, it is sometimes necessary to review what changes have been made to the SQL queries in the cube's data source view (DSV). This can be a problem, as the SQL editor in the DSV is not the best interface for reviewing code. Now of course you can cut and paste the SQL into SSMS, but you have to do each query one by one. What is worse, your DBA is unlikely to have BIDS installed, so you will have to manually export all the SQL yourself and send him the files.

    To make it easy to get hold of the SQL in a Data Source View, I developed a C# utility which connects to an OLAP database and uses Analysis Services Management Objects (AMO) to obtain and export all the SQL to a series of files. The added benefit of this approach is that these SQL files can be placed under source code control, which means the DBA can easily compare one version with another.

    The Trick

    When I came to implement this utility, I quickly found that the AMO API does not give direct access to anything useful about the tables in the data source view. Iterating through the DSVs and tables is easy, but getting to the SQL proved to be much harder. My Google searches returned little of value, so I took a look at the idea of using the XmlDom to open the DSV's XML and obtaining the SQL from that. This is when the breakthrough happened. Inspecting the DSV's XML, I saw that the things I was interested in were called:

    - TableType
    - DbTableName
    - FriendlyName
    - QueryDefinition

    Searching Google for FriendlyName returned this page: Programming AMO Fundamental Objects, which hinted at the fact that I could use something called ExtendedProperties to obtain these XML attributes. This simplified my code tremendously, making the implementation almost trivial. So here is my code with appropriate comments. The full solution can be downloaded from here: ExportCubeDsvSQL.zip

        using System;
        using System.Data;
        using System.IO;
        using Microsoft.AnalysisServices;

        ... class code removed for clarity

        // connect to the OLAP server
        Server olapServer = new Server();
        olapServer.Connect(config.olapServerName);
        if (olapServer != null)
        {
            // connected to server ok, so obtain reference to the OLAP database
            Database olapDatabase = olapServer.Databases.FindByName(config.olapDatabaseName);
            if (olapDatabase != null)
            {
                Console.WriteLine(string.Format("Successfully connected to '{0}' on '{1}'",
                    config.olapDatabaseName, config.olapServerName));

                // export SQL from each data source view (usually only one, but can be many!)
                foreach (DataSourceView dsv in olapDatabase.DataSourceViews)
                {
                    Console.WriteLine(string.Format("Exporting SQL from DSV '{0}'", dsv.Name));

                    // for each table in the DSV, export the SQL in a file
                    foreach (DataTable dt in dsv.Schema.Tables)
                    {
                        Console.WriteLine(string.Format("Exporting SQL from table '{0}'", dt.TableName));

                        // get name of the table in the DSV
                        // use the FriendlyName as the user inputs this and therefore has control of it
                        string queryName = dt.ExtendedProperties["FriendlyName"].ToString().Replace(" ", "_");
                        string sqlFilePath = Path.Combine(targetDir.FullName, queryName + ".sql");

                        // delete the sql file if it exists... file deletion code removed for clarity

                        // write out the SQL to a file
                        if (dt.ExtendedProperties["TableType"].ToString() == "View")
                        {
                            File.WriteAllText(sqlFilePath, dt.ExtendedProperties["QueryDefinition"].ToString());
                        }
                        if (dt.ExtendedProperties["TableType"].ToString() == "Table")
                        {
                            File.WriteAllText(sqlFilePath, dt.ExtendedProperties["DbTableName"].ToString());
                        }
                    }
                }
                Console.WriteLine(string.Format("Successfully written out SQL scripts to '{0}'", targetDir.FullName));
            }
        }

    Of course, if you are following industry best practice, you should be basing your cube on a series of views. This will mean that this utility is of limited practical value, unless of course you are inheriting a project and want to check whether someone did the implementation correctly.


  • Best way to load application settings

    - by enzom83
    A simple way to keep the settings of a Java application is a text file with a ".properties" extension, containing the identifier of each setting associated with a specific value (this value may be a number, string, date, etc.). C# uses a similar approach, but the text file must be named "App.config". In both cases, in source code you must initialize a specific class for reading settings: this class has a method that returns the value (as a string) associated with the specified setting identifier.

        // Java example
        Properties config = new Properties();
        config.load(...);
        String valueStr = config.getProperty("listening-port");
        // ...

        // C# example
        NameValueCollection setting = ConfigurationManager.AppSettings;
        string valueStr = setting["listening-port"];
        // ...

    In both cases we should parse the strings loaded from the configuration file and assign the converted values to the related typed objects (parsing errors could occur during this phase). After the parsing step, we must check that the setting values belong to a specific domain of validity: for example, the maximum size of a queue should be a positive value, some values may be related (example: min < max), and so on.

    Suppose that the application should load the settings as soon as it starts: in other words, the first operation performed by the application is to load the settings. Any invalid values for the settings must be replaced automatically with default values: if this happens to a group of related settings, those settings are all set to default values.

    The easiest way to perform these operations is to create a method that first parses all the settings, then checks the loaded values and finally sets any default values. However, maintenance is difficult if you use this approach: as the number of settings increases while developing the application, it becomes increasingly difficult to update the code. In order to solve this problem, I had thought of using the Template Method pattern, as follows.

        public abstract class Setting
        {
            protected abstract bool TryParseValues();
            protected abstract bool CheckValues();
            public abstract void SetDefaultValues();

            /// <summary>
            /// Template Method
            /// </summary>
            public bool TrySetValuesOrDefault()
            {
                if (!TryParseValues() || !CheckValues())
                {
                    // parsing error or domain error
                    SetDefaultValues();
                    return false;
                }
                return true;
            }
        }

        public class RangeSetting : Setting
        {
            private string minStr, maxStr;
            private byte min, max;

            public RangeSetting(string minStr, string maxStr)
            {
                this.minStr = minStr;
                this.maxStr = maxStr;
            }

            protected override bool TryParseValues()
            {
                return (byte.TryParse(minStr, out min) && byte.TryParse(maxStr, out max));
            }

            protected override bool CheckValues()
            {
                return (0 < min && min < max);
            }

            public override void SetDefaultValues()
            {
                min = 5;
                max = 10;
            }
        }

    The problem is that in this way we need to create a new class for each setting, even for a single value. Are there other solutions to this kind of problem? In summary:

    - Easy maintenance: for example, the addition of one or more parameters.
    - Extensibility: a first version of the application could read a single configuration file, but later versions may give the possibility of a multi-user setup (admin sets up a basic configuration, users can set only certain settings, etc.).
    - Object-oriented design.


  • EXT-js PropertyGrid best practices to achieve an update ?

    - by Tom
    Hello, I am using Ext JS for a project; usually everything is pretty straightforward with Ext JS, but with the PropertyGrid I am not sure. I'd like some advice about this piece of code. First, the store to populate the property grid, with a load listener:

        var configStore = new Ext.data.JsonStore({
            // store config
            autoLoad: true,
            url: url.remote,
            baseParams: {xaction: 'read'},
            storeId: 'configStore',
            // reader config
            idProperty: 'id_config',
            root: 'config',
            totalProperty: 'totalcount',
            fields: [{
                name: 'id_config'
            }, {
                name: 'email_admin'
            }, {
                name: 'default_from_addr'
            }, {
                name: 'default_from_name'
            }, {
                name: 'default_smtp'
            }],
            listeners: {
                load: {
                    fn: function(store, records, options){
                        // get the property grid component
                        var propGrid = Ext.getCmp('propGrid');
                        // make sure the property grid exists
                        if (propGrid) {
                            // populate the property grid with store data
                            propGrid.setSource(store.getAt(0).data);
                        }
                    }
                }
            }
        });

    Here is the PropertyGrid:

        var propsGrid = new Ext.grid.PropertyGrid({
            renderTo: 'prop-grid',
            id: 'propGrid',
            width: 462,
            autoHeight: true,
            propertyNames: {
                tested: 'QA',
                borderWidth: 'Border Width'
            },
            viewConfig: {
                forceFit: true,
                scrollOffset: 2 // the grid will never have scrollbars
            }
        });

    So far so good, but with the next button I trigger an old-school update, and my question is: is that the proper way to update this component? Or is it better to use an editor, or something else? For regular grids I use the store methods to do the update, delete, etc. The examples are really scarce on this one, even in books about Ext JS!

        new Ext.Button({
            renderTo: 'button-container',
            text: 'Update',
            handler: function(){
                var grid = Ext.getCmp("propGrid");
                var source = grid.getSource();
                var jsonDataStr = Ext.encode(source);
                var requestCg = {
                    url: url.update,
                    method: 'post',
                    params: {
                        config: jsonDataStr,
                        xaction: 'update'
                    },
                    timeout: 120000,
                    callback: function(options, success, response) {
                        alert(success + "\t" + response);
                    }
                };
                Ext.Ajax.request(requestCg);
            }
        });

    Thanks for reading.


  • Enabling Session State in SharePoint 2010?

    - by Steve Danner
    I have a web service built for SharePoint 2007 that I am trying to port to SharePoint 2010. This web service is dependent on session state to function properly, but so far I have been unable to get session state to work at all in SharePoint 2010. The web service runs as its own web application under the /_vti_bin virtual directory. I have tried all of the following with no luck:

    - Ensured the "State Service" service application is running.
    - Added the System.Web.SessionState.SessionStateModule http module to my application's web.config file.
    - Added the System.Web.SessionState.SessionStateModule http module to my SharePoint root web.config file.
    - Added <pages enableSessionState="true" /> to my application's web.config file.
    - Added <pages enableSessionState="true" /> to my root web.config file.

    Additional environment info:

    - Visual Studio 2008 - SP1
    - .NET 3.5 - SP1
    - SharePoint 2010 - RC
    - Windows Server 2008 R2
    - ASMX web service (not WCF)

    Has anyone had any luck getting a web application or web service to use session state in SharePoint 2010 yet? Thanks! Steve
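
    For reference, the two web.config changes from the list above, combined into one snippet. This is a sketch only: the module name is illustrative, and its ordering relative to SharePoint's own modules may matter.

        <configuration>
          <system.web>
            <pages enableSessionState="true" />
            <httpModules>
              <add name="Session" type="System.Web.SessionState.SessionStateModule" />
            </httpModules>
          </system.web>
        </configuration>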


  • How do I properly host a WCF Data Service in IIS? Why am I getting errors?

    - by j0rd4n
    I'm playing around with WCF Data Services (ADO.NET Data Services). I have an Entity Framework model pointed at the AdventureWorks database. When I debug my svc file from within Visual Studio, it works great. I can say /awservice.svc/Customers and get back the ATOM feed I expect. If I publish the service (hosted in an ASP.NET web application) to IIS7, the same query string returns a 500 fault. The root svc page itself works as expected and successfully returns ATOM. The /Customers path fails. Here is what my grants look like in the svc file:

        public class AWService : DataService<AWEntities>
        {
            public static void InitializeService(DataServiceConfiguration config)
            {
                config.SetEntitySetAccessRule("*", EntitySetRights.All);
                config.SetServiceOperationAccessRule("*", ServiceOperationRights.All);
                config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
            }
        }

    Update: I enabled verbose errors and get the following in the XML message:

        <innererror>
          <message>The underlying provider failed on Open.</message>
          <type>System.Data.EntityException</type>
          <stacktrace>
            at System.Data.EntityClient.EntityConnection.OpenStoreConnectionIf( ...
            ...
          </stacktrace>
          <internalexception>
            <message>
              Login failed for user 'IIS APPPOOL\DefaultAppPool'.
            </message>
            <type>System.Data.SqlClient.SqlException</type>
            <stacktrace>
              at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, ...
              ...
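
    For anyone reproducing this, the verbose errors mentioned in the update can be switched on roughly like so (a sketch; the ServiceBehavior attribute additionally surfaces exception detail in faults):

        [System.ServiceModel.ServiceBehavior(IncludeExceptionDetailInFaults = true)]
        public class AWService : DataService<AWEntities>
        {
            public static void InitializeService(DataServiceConfiguration config)
            {
                config.UseVerboseErrors = true;  // returns <innererror> detail like the XML above
                // ... access rules as shown earlier ...
            }
        }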


  • Sharing an assembly between ASP.NET and Silverlight

    - by vtortola
    Hi, I've created an assembly to share between my main app and the Silverlight app. At the beginning it looked like it was going to work, but now I get this exception: "System.IO.FileNotFoundException was caught, Message="Could not load file or assembly 'System.Xml.Linq'". I'm using .NET 3.5 SP1 and Silverlight 3. That shared assembly uses System.Xml.Linq, and it cannot find it... I think because it is trying to find that version in the .NET framework instead of looking in the Silverlight one. How can I fix this? Cheers.

    PS: this is the full exception output:

        System.IO.FileNotFoundException was caught
        Message="Could not load file or assembly 'System.Xml.Linq, Version=2.0.5.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified."
        Source="MyApp.Metadata"
        FileName="System.Xml.Linq, Version=2.0.5.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
        FusionLog="=== Pre-bind state information ===
        LOG: User = IIS APPPOOL\DefaultAppPool
        LOG: DisplayName = System.Xml.Linq, Version=2.0.5.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35 (Fully-specified)
        LOG: Appbase = file:///C:/Users/vtortola.MyApp/Documents/MyApp/MyAppSAS/WebApplication1/WebApplication1/
        LOG: Initial PrivatePath = C:\Users\vtortola.MyApp\Documents\MyApp\MyAppSAS\WebApplication1\WebApplication1\bin
        Calling assembly : MyApp.Metadata, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null.
        ===
        LOG: This bind starts in default load context.
        LOG: Using application configuration file: C:\Users\vtortola.MyApp\Documents\MyApp\MyAppSAS\WebApplication1\WebApplication1\web.config
        LOG: Using host configuration file: C:\Windows\Microsoft.NET\Framework64\v2.0.50727\Aspnet.config
        LOG: Using machine configuration file from C:\Windows\Microsoft.NET\Framework64\v2.0.50727\config\machine.config.
        LOG: Post-policy reference: System.Xml.Linq, Version=2.0.5.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35
        LOG: The same bind was seen before, and was failed with hr = 0x80070002."
        StackTrace:
          at MyApp.Metadata.MyAppEntity.Deserialize(String message)


  • Multiple database with Spring+Hibernate+JPA

    - by ziftech
    Hi everybody! I'm trying to configure Spring+Hibernate+JPA to work with two databases (MySQL and MSSQL).

    My datasource-context.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xmlns:aop="http://www.springframework.org/schema/aop"
            xsi:schemaLocation="http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.5.xsd
                http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
                http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-2.5.xsd
                http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-2.5.xsd"
            xmlns:p="http://www.springframework.org/schema/p"
            xmlns:tx="http://www.springframework.org/schema/tx"
            xmlns:util="http://www.springframework.org/schema/util">

            <!-- Data Source config -->
            <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource"
                destroy-method="close"
                p:driverClassName="${local.jdbc.driver}" p:url="${local.jdbc.url}"
                p:username="${local.jdbc.username}" p:password="${local.jdbc.password}">
            </bean>

            <bean id="dataSourceRemote" class="org.apache.commons.dbcp.BasicDataSource"
                destroy-method="close"
                p:driverClassName="${remote.jdbc.driver}" p:url="${remote.jdbc.url}"
                p:username="${remote.jdbc.username}" p:password="${remote.jdbc.password}" />

            <bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager"
                p:entity-manager-factory-ref="entityManagerFactory" />

            <!-- JPA config -->
            <tx:annotation-driven transaction-manager="transactionManager" />

            <bean id="persistenceUnitManager"
                class="org.springframework.orm.jpa.persistenceunit.DefaultPersistenceUnitManager">
                <property name="persistenceXmlLocations">
                    <list value-type="java.lang.String">
                        <value>classpath*:config/persistence.local.xml</value>
                        <value>classpath*:config/persistence.remote.xml</value>
                    </list>
                </property>
                <property name="dataSources">
                    <map>
                        <entry key="localDataSource" value-ref="dataSource" />
                        <entry key="remoteDataSource" value-ref="dataSourceRemote" />
                    </map>
                </property>
                <property name="defaultDataSource" ref="dataSource" />
            </bean>

            <bean id="entityManagerFactory"
                class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
                <property name="jpaVendorAdapter">
                    <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"
                        p:showSql="true" p:generateDdl="true">
                    </bean>
                </property>
                <property name="persistenceUnitManager" ref="persistenceUnitManager" />
                <property name="persistenceUnitName" value="localjpa" />
            </bean>

            <bean class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor" />
        </beans>

    Each persistence.xml contains one unit, like this:

        <persistence-unit name="remote" transaction-type="RESOURCE_LOCAL">
            <properties>
                <property name="hibernate.ejb.naming_strategy" value="org.hibernate.cfg.DefaultNamingStrategy" />
                <property name="hibernate.dialect" value="${remote.hibernate.dialect}" />
                <property name="hibernate.hbm2ddl.auto" value="${remote.hibernate.hbm2ddl.auto}" />
            </properties>
        </persistence-unit>

    The PersistenceUnitManager causes the following exception:

        Cannot resolve reference to bean 'persistenceUnitManager' while setting bean property
        'persistenceUnitManager'; nested exception is
        org.springframework.beans.factory.BeanCreationException: Error creating bean with name
        'persistenceUnitManager' defined in class path resource [config/datasource-context.xml]:
        Initialization of bean failed; nested exception is
        org.springframework.beans.TypeMismatchException: Failed to convert property value of type
        [java.util.ArrayList] to required type [java.lang.String] for property
        'persistenceXmlLocation'; nested exception is java.lang.IllegalArgumentException: Cannot
        convert value of type [java.util.ArrayList] to required type [java.lang.String] for
        property 'persistenceXmlLocation': no matching editors or conversion strategy found

    If I leave only one persistence.xml, without the list, everything works fine, but I need two units... I am also trying to find an alternative way to work with two databases in a Spring+Hibernate context, so I would appreciate any solution.

    New error after changing to persistenceXmlLocations:

        No single default persistence unit defined in {classpath:config/persistence.local.xml,
        classpath:config/persistence.remote.xml}

    UPDATE: I added persistenceUnitName; it works, but only with one unit. Still need help.

    UPDATE: thanks, ChssPly76, I changed the config files.

    datasource-context.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xmlns:aop="http://www.springframework.org/schema/aop"
            xsi:schemaLocation="http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.5.xsd
                http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
                http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-2.5.xsd
                http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-2.5.xsd"
            xmlns:p="http://www.springframework.org/schema/p"
            xmlns:tx="http://www.springframework.org/schema/tx"
            xmlns:util="http://www.springframework.org/schema/util">

            <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close"
                p:driverClassName="${local.jdbc.driver}" p:url="${local.jdbc.url}"
                p:username="${local.jdbc.username}" p:password="${local.jdbc.password}">
            </bean>

            <bean id="dataSourceRemote" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close"
                p:driverClassName="${remote.jdbc.driver}" p:url="${remote.jdbc.url}"
                p:username="${remote.jdbc.username}" p:password="${remote.jdbc.password}">
            </bean>

            <bean class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor">
                <property name="defaultPersistenceUnitName" value="pu1" />
            </bean>

            <bean id="persistenceUnitManager"
                class="org.springframework.orm.jpa.persistenceunit.DefaultPersistenceUnitManager">
                <property name="persistenceXmlLocation" value="${persistence.xml.location}" />
                <property name="defaultDataSource" ref="dataSource" />
                <!-- problem -->
                <property name="dataSources">
                    <map>
                        <entry key="local" value-ref="dataSource" />
                        <entry key="remote" value-ref="dataSourceRemote" />
                    </map>
                </property>
            </bean>

            <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
                <property name="jpaVendorAdapter">
                    <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"
                        p:showSql="true" p:generateDdl="true">
                    </bean>
                </property>
                <property name="persistenceUnitManager" ref="persistenceUnitManager" />
                <property name="persistenceUnitName" value="pu1" />
                <property name="dataSource" ref="dataSource" />
            </bean>

            <bean id="entityManagerFactoryRemote" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
                <property name="jpaVendorAdapter">
                    <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"
                        p:showSql="true" p:generateDdl="true">
                    </bean>
                </property>
                <property name="persistenceUnitManager" ref="persistenceUnitManager" />
                <property name="persistenceUnitName" value="pu2" />
                <property name="dataSource" ref="dataSourceRemote" />
            </bean>

            <tx:annotation-driven />

            <bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager"
                p:entity-manager-factory-ref="entityManagerFactory" />

            <bean id="transactionManagerRemote" class="org.springframework.orm.jpa.JpaTransactionManager"
                p:entity-manager-factory-ref="entityManagerFactoryRemote" />
        </beans>

    persistence.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <persistence xmlns="http://java.sun.com/xml/ns/persistence"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd"
            version="1.0">
            <persistence-unit name="pu1" transaction-type="RESOURCE_LOCAL">
                <properties>
                    <property name="hibernate.ejb.naming_strategy" value="org.hibernate.cfg.DefaultNamingStrategy" />
                    <property name="hibernate.dialect" value="${local.hibernate.dialect}" />
                    <property name="hibernate.hbm2ddl.auto" value="${local.hibernate.hbm2ddl.auto}" />
                </properties>
            </persistence-unit>
            <persistence-unit name="pu2" transaction-type="RESOURCE_LOCAL">
                <properties>
                    <property name="hibernate.ejb.naming_strategy" value="org.hibernate.cfg.DefaultNamingStrategy" />
                    <property name="hibernate.dialect" value="${remote.hibernate.dialect}" />
                    <property name="hibernate.hbm2ddl.auto" value="${remote.hibernate.hbm2ddl.auto}" />
                </properties>
            </persistence-unit>
        </persistence>

    Now it builds two entityManagerFactory beans, but both are for Microsoft SQL Server:

        [main] INFO org.hibernate.ejb.Ejb3Configuration - Processing PersistenceUnitInfo [ name: pu1 ...]
        [main] INFO org.hibernate.cfg.SettingsFactory - RDBMS: Microsoft SQL Server
        [main] INFO org.hibernate.ejb.Ejb3Configuration - Processing PersistenceUnitInfo [ name: pu2 ...]
        [main] INFO org.hibernate.cfg.SettingsFactory - RDBMS: Microsoft SQL Server   (but it must be MySQL)

    I suspect that only dataSource is being used, and dataSourceRemote is never substituted. That's my last problem.


  • Preserving case in HTTP headers with Ruby's Net:HTTP

    - by emh
    Although the HTTP spec says that headers are case-insensitive, PayPal, with their new Adaptive Payments API, require their headers to be case-sensitive. Using the PayPal adaptive payments extension for ActiveMerchant (http://github.com/lamp/paypal_adaptive_gateway), it seems that although the headers are set in all caps, they are sent in mixed case. Here is the code that sends the HTTP request:

        headers = {
          "X-PAYPAL-REQUEST-DATA-FORMAT" => "XML",
          "X-PAYPAL-RESPONSE-DATA-FORMAT" => "JSON",
          "X-PAYPAL-SECURITY-USERID" => @config[:login],
          "X-PAYPAL-SECURITY-PASSWORD" => @config[:password],
          "X-PAYPAL-SECURITY-SIGNATURE" => @config[:signature],
          "X-PAYPAL-APPLICATION-ID" => @config[:appid]
        }
        build_url action

        request = Net::HTTP::Post.new(@url.path)
        request.body = @xml
        headers.each_pair { |k,v| request[k] = v }
        request.content_type = 'text/xml'

        proxy = Net::HTTP::Proxy("127.0.0.1", "60723")
        server = proxy.new(@url.host, 443)
        server.use_ssl = true
        server.start { |http| http.request(request) }.body

    (I added the proxy line so I could see what was going on with Charles - http://www.charlesproxy.com/.) When I look at the request headers in Charles, this is what I see:

        X-Paypal-Application-Id ...
        X-Paypal-Security-Password ...
        X-Paypal-Security-Signature ...
        X-Paypal-Security-Userid ...
        X-Paypal-Request-Data-Format XML
        X-Paypal-Response-Data-Format JSON
        Accept */*
        Content-Type text/xml
        Content-Length 522
        Host svcs.sandbox.paypal.com

    I verified that it is not Charles doing the case conversion by running a similar request using curl. In that test the case was preserved.
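
    A workaround that has been used for this: Net::HTTP downcases header names on assignment and re-capitalizes them on send, so one known hack is to wrap each name in a String subclass whose case-changing methods are no-ops. This is a sketch only; the explicit split override is belt-and-braces, since String-returning methods preserved subclasses in pre-3.0 Ruby anyway.

        class CaseSensitiveString < String
          def downcase; self; end    # keep the stored key exactly as given
          def capitalize; self; end  # defeat Net::HTTP's re-capitalization
          def split(*args)           # each_capitalized splits on '-' and
            super.map { |s| self.class.new(s) }  # capitalizes each part
          end
        end

        headers.each_pair { |k, v| request[CaseSensitiveString.new(k)] = v }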

    Read the article

  • Help catching AV with WinDbg and ADPlus 7.0

    - by Stoune
    I want to catch a memory access violation in SQL Server Compact Edition like the one described at http://debuggingblog.com/wp/2009/02/18/memory-access-violation-in-sql-server-compact-editionce/. The suggested config is:

      CRASH Quiet MyApp.exe NoDumpOnFirstChance clr;av FullDump gn

    I downloaded the latest Debugging Tools and saw that Microsoft has rewritten the adplus tool in managed code and changed the config-file syntax. I rewrote the config file like this:

      <ADPlus Version="2">
        <Settings>
          <RunMode>Crash</RunMode>
          <Option>Quiet</Option>
          <Option>NoDumpOnFirst</Option>
          <Sympath>c:\symbols\</Sympath>
          <OutputDir>c:\work\output\</OutputDir>
          <ProcessName>c:\work\app\output\MyApp.exe</ProcessName>
        </Settings>
        <Exceptions><!--to get the full dump on clr access violation-->
          <Exception Code="clr;av">
            <Actions1>FullDump</Actions1>
            <ReturnAction1>gn</ReturnAction1>
          </Exception>
        </Exceptions>
      </ADPlus>

    And I get the error "Couldn't find exception with code: clr;av". If I understand correctly, it didn't load the sos extension, but I can't find the right section and syntax to load it. adplus_old.vbs, for some reason, didn't launch the process on Windows 7.

      WinDBG 6.12.0002.633 X86
      ADPlus Engine Version: 7.01.002 02/27/2009

    Does anyone have a working example config for debugging a .NET app with the latest adplus.exe?
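
    One hedged thing to try, as an assumption rather than a verified fix: the v2 engine may be parsing each Code attribute as a single exception code rather than the old semicolon-joined "clr;av" filter, and "av" on its own is a standard ADPlus code for access violations - which a CLR-side AV still raises natively. The sketch below changes only that one attribute in the question's config:

      <Exceptions>
        <!-- assumption: a single known code is accepted where "clr;av" is not -->
        <Exception Code="av">
          <Actions1>FullDump</Actions1>
          <ReturnAction1>gn</ReturnAction1>
        </Exception>
      </Exceptions>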

    Read the article

  • Default Transaction Timeout

    - by MattH
    I used to set transaction timeouts by using TransactionOptions.Timeout, but have decided for ease of maintenance to use the config approach:

      <system.transactions>
        <defaultSettings timeout="00:01:00" />
      </system.transactions>

    Of course, after putting this in I wanted to test it was working, so I reduced the timeout to 5 seconds, then ran a test that lasted longer than this - but the transaction does not appear to abort! If I adjust the test to set TransactionOptions.Timeout to 5 seconds, the test works as expected. After investigating, I think the problem relates to TransactionOptions.Timeout even though I'm no longer using it. I still need to use TransactionOptions so I can set IsolationLevel, but I no longer set the Timeout value; if I look at this object after I create it, the timeout value is 00:00:00, which equates to infinity. Does this mean my value set in the config file is being ignored? To summarise: Is it impossible to mix the config setting and the use of TransactionOptions? If not, is there any way to extract the config setting at runtime and use it to set the Timeout property? [Edit] OR set the default isolation level without using TransactionOptions.
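
    Both observations line up with a Timeout of TimeSpan.Zero meaning "infinite", which wins over the config default as soon as a TransactionOptions is supplied. The configured default can be read back at runtime via TransactionManager.DefaultTimeout; a sketch follows, in which the isolation level is an assumed example:

      using System.Transactions;

      var options = new TransactionOptions
      {
          IsolationLevel = IsolationLevel.ReadCommitted,   // assumed level - set as needed
          Timeout = TransactionManager.DefaultTimeout      // picks up <defaultSettings timeout="..."/>
      };

      using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
      {
          // ... transactional work ...
          scope.Complete();
      }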

    Read the article

  • Grails Mail port configuration

    - by bsreekanth
    Hello, I am trying to send mail through the Grails mail plugin. I configured it according to the documentation, and also followed a few blog posts (http://blog.lourish.com/2010/04/02/sending-asynchronous-html-email-in-grails-with-activemq-jms-and-gmail/). That post mentions that the closure style of declaring the configuration overrides the others, but that is not true. Anyway, I tried both approaches, but it seems the port still uses the SMTP default one. I get the exception below:

      org.springframework.mail.MailSendException: Mail server connection failed;
        nested exception is javax.mail.MessagingException: Could not connect to SMTP host: localhost, port: 25;
        nested exception is: java.net.ConnectException: Connection refused: connect

    I then wrote a small program directly against the JavaMail library, and I could send the mail with that. The configuration is shown below. I tried the additional config "mail.smtp.port":"465", but no change; I also used the parameters mentioned in the above blog post, with the same result.

      grails {
          mail {
              host = "smtp.gmail.com"
              port = "465"
              username = "[email protected]"
              password = "mypwd"
              props = ["mail.smtp.auth":"true",
                       // "mail.smtp.port":"465",
                       "mail.smtp.socketFactory.port":"465",
                       "mail.smtp.socketFactory.class":"javax.net.ssl.SSLSocketFactory",
                       "mail.smtp.socketFactory.fallback":"false"]
          }
      }

    Thanks in advance.

    Update: It is not a port or firewall issue: when I made a Grails application from scratch and tried the same config, everything worked. I have also asked in the Grails forum: http://grails.1312388.n4.nabble.com/grails-mail-mailSender-does-not-have-config-values-td2237704.html#a2237704 . Hoping for a lead to try.

    Read the article

  • Auth problem on Facebook using Ruby/sinatra/frankie/facebooker

    - by user84584
    Hello guys, I'm using sinatra/frankie/facebooker to prototype something simple to test the Facebook API. I'm using mmangino-facebooker, the most recent version from GitHub, and I cloned the most recent version of frankie. I'm using sinatra 0.9.6. My main code is as simple as possible:

      before do
        ensure_application_is_installed_by_facebook_user
        @user = session[:facebook_session].user
        @photos = session[:facebook_session].get_photos(nil,@user.uid,nil)
      end

      get "/" do
        erb :index
      end

      get "/:uid/:image" do |uid,image|
        @photo_selected = session[:facebook_session].get_photos([image.to_i],nil,nil)
        erb :selected
      end

    The index page just renders a link to the other one (identified by "/:uid/:image"); however, I always get an error when it tries to render the one identified by "/:uid/:image":

      Facebooker::Session::MissingOrInvalidParameter: Invalid parameter
      /Library/Ruby/Gems/1.8/gems/mmangino-facebooker-1.0.50/lib/facebooker/parser.rb:610:in `process'
      /Library/Ruby/Gems/1.8/gems/mmangino-facebooker-1.0.50/lib/facebooker/parser.rb:30:in `parse'
      /Library/Ruby/Gems/1.8/gems/mmangino-facebooker-1.0.50/lib/facebooker/service.rb:67:in `post'
      /Library/Ruby/Gems/1.8/gems/mmangino-facebooker-1.0.50/lib/facebooker/session.rb:600:in `post_without_logging'
      /Library/Ruby/Gems/1.8/gems/mmangino-facebooker-1.0.50/lib/facebooker/session.rb:611:in `post'
      /Library/Ruby/Gems/1.8/gems/mmangino-facebooker-1.0.50/lib/facebooker/logging.rb:20:in `log_fb_api'
      /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/benchmark.rb:308:in `realtime'
      /Library/Ruby/Gems/1.8/gems/mmangino-facebooker-1.0.50/lib/facebooker/logging.rb:20:in `log_fb_api'
      /Library/Ruby/Gems/1.8/gems/mmangino-facebooker-1.0.50/lib/facebooker/session.rb:610:in `post'
      /Library/Ruby/Gems/1.8/gems/mmangino-facebooker-1.0.50/lib/facebooker/session.rb:198:in `secure!'
      ./config/frankie/lib/frankie.rb:66:in `secure_with_token!'
      ./config/frankie/lib/frankie.rb:44:in `set_facebook_session'
      ./config/frankie/lib/frankie.rb:164:in `ensure_authenticated_to_facebook'
      ./config/frankie/lib/frankie.rb:169:in `ensure_application_is_installed_by_facebook_user'

    I've no idea why; it seems to be related to the auth token, I guess. I logged the requests made to the fb REST server:

      {:sig=>"4f244d1f510498f4efaae3c03d036a85", :generate_session_secret=>"0", :method=>"facebook.auth.getSession", :auth_token=>"9dae0d02c19c680b574c78d202b0582a", :api_key=>"70c14732815ace0ae71a561ea5eb38b7", :v=>"1.0"}
      {:call_id=>"1269003766.05665", :sig=>"194469457d1424dc8ba0678979692363", :method=>"facebook.photos.get", :subj_id=>750401957, :session_key=>"2.lXL0z3s4_r573xzQwAiA9A__.3600.1269010800-750401957", :api_key=>"70c14732815ace0ae71a561ea5eb38b7", :v=>"1.0"}
      {:sig=>"4f244d1f510498f4efaae3c03d036a85", :generate_session_secret=>"0", :method=>"facebook.auth.getSession", :auth_token=>"9dae0d02c19c680b574c78d202b0582a", :api_key=>"70c14732815ace0ae71a561ea5eb38b7", :v=>"1.0"}

    The last one gives the error. Could it be related to auth_token having the same value in the 1st and the 3rd request? Cheers and thanks, Ze Maria

    Read the article

  • ServiceRoute + WebServiceHostFactory kills WSDL generation? How to create extensionless WCF service

    - by Ethan J. Brown
    I'm trying to use extensionless / .svc-less WCF services. Can anyone else confirm or deny the issue I'm experiencing? I use routing in code, and do this in Application_Start of global.asax.cs:

      RouteTable.Routes.Add(new ServiceRoute("Data", new WebServiceHostFactory(), typeof(DataDips)));

    I have tested in both IIS 6 and IIS 7.5, and I can use the service just fine (i.e. my extensionless handler is correctly configured for ASP.NET). However, metadata generation is totally broken. I can hit my /mex endpoint with the WCF Test Client (and I presume svcutil.exe) -- but the ?wsdl generation you typically get with .svc is toast. I can't hit it with a browser (I get 400 Bad Request), I can't hit it with wsdl.exe, etc. Metadata generation is configured correctly in web.config. This is a problem, of course, because the service is exposed over basicHttpBinding so that an old-style ASMX client can get to it, and that client can't generate its proxy without a WSDL description. If I instead use service activation routing in config like this, rather than registering a route in code:

      <serviceHostingEnvironment aspNetCompatibilityEnabled="true">
        <serviceActivations>
          <add relativeAddress="Data.svc" service="DataDips" />
        </serviceActivations>
      </serviceHostingEnvironment>

    then voila... it works. But then I don't have a clean extensionless URL. If I change relativeAddress from Data.svc to Data, I get a configuration exception, as this is not supported by config (you must use an extension registered to WCF). I've also attempted to use this code in conjunction with the above config:

      RouteTable.Routes.MapPageRoute("", "Data/{*data}", "~/Data.svc/{*data}", false);

    My thinking is that I can just point the extensionless URL at the configured .svc URL. This doesn't work -- /Data.svc continues to work, but /Data returns a 404. Anyone with any bright ideas?
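
    One avenue to check, offered as a sketch rather than a confirmed fix: WebServiceHostFactory builds a WebServiceHost, which is aimed at webHttpBinding REST endpoints and does not stand up the usual HTTP-GET ?wsdl page. For a basicHttpBinding SOAP service, the plain ServiceHostFactory may leave metadata generation intact (assuming httpGetEnabled is on in web.config):

      using System.ServiceModel.Activation;
      using System.Web.Routing;

      // in Application_Start: same extensionless route, but with the SOAP-oriented host factory
      RouteTable.Routes.Add(new ServiceRoute("Data", new ServiceHostFactory(), typeof(DataDips)));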

    Read the article

  • How would you implement API key in WCF Data Service?

    - by rushonerok
    Is there a way to require an API key in the URL, or some other way of passing the service a private key, in order to grant access to the data? I have this right now:

      using System;
      using System.Data.Services;
      using System.Data.Services.Common;
      using System.Collections.Generic;
      using System.Linq;
      using System.ServiceModel.Web;
      using Numina.Framework;
      using System.Web;
      using System.Configuration;

      [System.ServiceModel.ServiceBehavior(IncludeExceptionDetailInFaults = true)]
      public class odata : DataService
      {
          public static void InitializeService(DataServiceConfiguration config)
          {
              config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
              //config.SetServiceOperationAccessRule("*", ServiceOperationRights.All);
              config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
          }

          protected override void OnStartProcessingRequest(ProcessRequestArgs args)
          {
              HttpRequest Request = HttpContext.Current.Request;
              if (Request["apikey"] != ConfigurationManager.AppSettings["ApiKey"])
                  throw new DataServiceException("ApiKey needed");
              base.OnStartProcessingRequest(args);
          }
      }

    This works, but it's not perfect, because you cannot get at the metadata and discover the service through the Add Service Reference explorer. I could check whether $metadata is in the URL, but it seems like a hack. Is there a better way?
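
    For what it's worth, exempting $metadata is also roughly what a service-level allowlist looks like in practice. A sketch along those lines, dropped into the class above - the 403 status code and the case-insensitive comparison are choices, not requirements:

      protected override void OnStartProcessingRequest(ProcessRequestArgs args)
      {
          // let metadata requests through so Add Service Reference still works
          bool isMetadata = args.RequestUri.AbsolutePath
              .EndsWith("$metadata", StringComparison.OrdinalIgnoreCase);

          HttpRequest request = HttpContext.Current.Request;
          if (!isMetadata && request["apikey"] != ConfigurationManager.AppSettings["ApiKey"])
              throw new DataServiceException(403, "ApiKey needed");

          base.OnStartProcessingRequest(args);
      }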

    Read the article

  • Boost in Visual Studio 2010, IntelliSense error

    - by Peretz
    Hello, I would like to see if you could orient me. I compiled and referenced the Boost libraries in order to use them with Visual Studio 2010. When building my test project I get these two IntelliSense errors:

      1 IntelliSense: #error directive: "Macro BOOST_LIB_NAME not set (internal error)" c:\boost_1_43_0\boost\config\auto_link.hpp
      2 IntelliSense: #error directive: "some required macros where not defined (internal logic error)." c:\boost_1_43_0\boost\config\auto_link.hpp

    Checking the auto_link.hpp header file, the first error is on this line:

      #ifndef BOOST_LIB_NAME
      #  error "Macro BOOST_LIB_NAME not set (internal error)"
      #endif

    Tracing the definition of BOOST_LIB_NAME, it seems that it is defined in config.hpp by boost_regex, whose code I am including below:

      #if !defined(BOOST_REGEX_NO_LIB) && !defined(BOOST_REGEX_SOURCE) && !defined(BOOST_ALL_NO_LIB) && defined(__cplusplus)
      #  define BOOST_LIB_NAME boost_regex
      #  if defined(BOOST_REGEX_DYN_LINK) || defined(BOOST_ALL_DYN_LINK)
      #    define BOOST_DYN_LINK
      ... more code

    Strangely, when I point at BOOST_LIB_NAME, it defines BOOST_LIB_NAME and the IntelliSense errors disappear. My program builds and executes fine using the Boost::Regex library -- with or without the IntelliSense errors; however, I do not understand why these IntelliSense errors appear in the first place, and second, why pointing at the macro in config.hpp defines BOOST_LIB_NAME. Any guidance will be greatly appreciated. Thanks, Jaime
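
    A sketch of one common workaround, assuming the errors really do come only from the IntelliSense compilation pass and not the real build: Visual C++ defines __INTELLISENSE__ during that pass, so Boost's auto-linking (the machinery behind auto_link.hpp) can be switched off for IntelliSense alone:

      // silences auto_link.hpp's #error lines for the IntelliSense pass only;
      // the real build still auto-links as before
      #ifdef __INTELLISENSE__
      #  define BOOST_ALL_NO_LIB
      #endif
      #include <boost/regex.hpp>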

    Read the article

  • Can YAML have inheritance?

    - by Jason
    This question involves a lot of symfony, but it should be easy enough to follow for someone who only knows YAML and not symfony. My symfony models come from a three-step process: First, I create the tables in MySQL. Second, I run a symfony command (symfony doctrine:build-schema) to convert my table structure into a YAML file. Third, I run another symfony command (symfony doctrine:build-model) to convert the YAML file into PHP code.

    Here's the problem: there are some tables in the database that I don't want to end up in my symfony code. For example, let's say I have two tables: one called my_table and another called wordpress. The YAML file I end up with might look like this:

      MyTable:
        connection: doctrine
        tableName: my_table
      Wordpress:
        connection: doctrine
        tableName: wordpress

    That's great, except the wordpress table has nothing to do with my symfony models. The result is that every single time I make a change to my database and generate this YAML file, I have to manually remove wordpress. It's annoying! I'd like to be able to create a file called baseConfig.php or something that looks like this:

      $config = array(
        'MyTable' => array(
          'connection' => 'doctrine',
          'tableName' => 'my_table',
        ),
        'Wordpress' => array(
          'connection' => 'doctrine',
          'tableName' => 'wordpress',
        ),
      );

    And then I could have a separate file called config.php or something where I could make modifications to the base config:

      unset($config['Wordpress']);

    So my question is: is there any way to convert YAML into executable PHP code (as opposed to loading YAML into PHP code, like sfYaml::load() does) to achieve this sort of thing? Or is there maybe some other way to achieve YAML inheritance? Thanks, Jason
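
    One direction worth sketching, under the assumption that the schema lives at symfony's usual config/doctrine/schema.yml path: sfYaml can dump as well as load, so a small post-build script can strip the unwanted models and write the schema straight back out as YAML:

      // run after "symfony doctrine:build-schema": load the generated schema,
      // drop the models that symfony shouldn't know about, and write it back
      $config = sfYaml::load('config/doctrine/schema.yml');
      unset($config['Wordpress']);
      file_put_contents('config/doctrine/schema.yml', sfYaml::dump($config));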

    Read the article

  • NHibernate.Search - async mode

    - by Atul
    Hi, I am using NHibernate Lucene search in my project:

      Lucene.Net.dll - v2.3.1.3
      NHibernate.dll - v2.1.0.4000

    At this point I am trying to use the async option for indexing, and used the following options:

      config.SetProperty(NHibernate.Search.Environment.WorkerExecution, "async");
      config.SetProperty(NHibernate.Search.Environment.WorkerThreadPoolSize, "1");
      config.SetProperty(NHibernate.Search.Environment.WorkerWorkQueueSize, "5000");

    Questions:

    1) My initial index was not built with this option; when I used these settings for the first time, I got an error saying NHibernate.Search.dll was not found. After I deleted the existing index and started again, it went fine. Do we need to rebuild indexes whenever we change config settings like the above?

    2) How should the size of the index be interpreted? Initially my index was about 400MB (built over the last few months), which I deleted. Later, when I reindexed, the size of the index went down to 5MB! Search appears to be all right after limited testing, but such a change seemed a bit scary. Should we delete/rebuild indexes once in a while, and is it normal for the size to change this drastically?

    3) Are my settings above OK? When I had WorkerThreadPoolSize=5, I once got a Dr. Watson kind of error. Please advise on best practices for using the async configuration for search. Regards, Atul

    Read the article

  • How do I manage application configuration in ASP.NET?

    - by GlennS
    I am having difficulty managing the configuration of an ASP.NET application deployed for different clients. The sheer volume of settings that need twiddling takes up large amounts of time, and the current configuration methods are too complicated for us to push this responsibility out to support partners. Any suggestions for better methods to handle this, or good sources of information to research?

    How we do things at present:

      - Various XML configuration files which are referenced in Web.Config, for example an AppSettings.xml
      - Configurations for specific sites are kept in duplicate configuration files
      - Text files containing lists of data specific to the site
      - In some cases, manual one-off changes to the database
      - C# configuration for Windsor IOC

    The specific issues we are having:

      - Different sites with different features enabled, different external services we have to talk to, and different business rules
      - Different deployment types (live, test, training)
      - Configuration keys change across versions (get added, removed), meaning we have to update all the duplicate files
      - We still need to be able to alter keys while the application is running

    Our current thoughts on how we might approach this (see the sketch below):

      - Move the configuration into dynamically compiled code (possibly Boo, Binsor or JavaScript)
      - Have some form of diffing/merging configuration: combine a default config with a live/test/training config and a site-specific config
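
    Since this is a Visual Studio 2010-era question, one concrete building block for the diffing/merging idea is web.config transforms (XDT), which merge an environment-specific delta file over a base config at publish time. A minimal sketch, where the FeatureX app setting is a made-up example:

      <!-- Web.Test.config: merged over Web.config when publishing the Test configuration -->
      <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
        <appSettings>
          <add key="FeatureX" value="false"
               xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
        </appSettings>
      </configuration>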

    Read the article

  • Smack API giving error while logging into Tigase Server setup locally

    - by Ameya Phadke
    Hi, I am currently developing an Android XMPP client to communicate with a Tigase server set up locally. Before starting development on Android, I am writing a simple Java program on a PC to test connectivity with the XMPP server. My XMPP domain is my PC name, "mwbn43-1", and the administrator username and password are admin and tigase respectively. Following is a snippet of the code I am using:

      class Test {
          public static void main(String args[]) throws Exception {
              System.setProperty("smack.debugEnabled", "true");
              XMPPConnection.DEBUG_ENABLED = true;
              ConnectionConfiguration config = new ConnectionConfiguration("mwbn43-1", 5222);
              config.setCompressionEnabled(true);
              config.setSASLAuthenticationEnabled(true);
              XMPPConnection con = new XMPPConnection(config);
              // Connect to the server
              con.connect();
              con.login("admin", "tigase");
              Chat chat = con.getChatManager().createChat("aaphadke@mwbn43-1", new MessageListener() {
                  public void processMessage(Chat chat, Message message) {
                      // Print out any messages we get back to standard out.
                      System.out.println("Received message: " + message);
                  }
              });
              try {
                  chat.sendMessage("Hi!");
              } catch (XMPPException e) {
                  System.out.println("Error Delivering block");
              }
              String host = con.getHost();
              String user = con.getUser();
              String id = con.getConnectionID();
              int port = con.getPort();
              boolean i = false;
              i = con.isConnected();
              if (i)
                  System.out.println("Connected to host " + host + " via port " + port + " connection id is " + id);
              System.out.println("User is " + user);
              con.disconnect();
          }
      }

    When I run this code I get the following error:

      Exception in thread "main" Resource binding not offered by server:
        at org.jivesoftware.smack.SASLAuthentication.bindResourceAndEstablishSession(SASLAuthentication.java:416)
        at org.jivesoftware.smack.SASLAuthentication.authenticate(SASLAuthentication.java:331)
        at org.jivesoftware.smack.XMPPConnection.login(XMPPConnection.java:395)
        at org.jivesoftware.smack.XMPPConnection.login(XMPPConnection.java:349)
        at Test.main(Test.java:26)

    I found articles on the same problem but no concrete solution. Could anyone please tell me the solution to this problem? I checked the XMPPConnection.java file in the Smack API and it looks the same as given in the linked solution. Thanks, Ameya
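
    A workaround that comes up repeatedly for this exact message - offered as a sketch for Smack 3.x, since it trades SASL mechanism selection for compatibility - is to push the PLAIN mechanism to the front before connecting:

      import org.jivesoftware.smack.ConnectionConfiguration;
      import org.jivesoftware.smack.SASLAuthentication;
      import org.jivesoftware.smack.XMPPConnection;

      public class PlainLoginTest {
          public static void main(String[] args) throws Exception {
              // ask Smack to prefer PLAIN; some servers mis-negotiate other mechanisms,
              // which can surface as "Resource binding not offered by server"
              SASLAuthentication.supportSASLMechanism("PLAIN", 0);

              ConnectionConfiguration config = new ConnectionConfiguration("mwbn43-1", 5222);
              config.setSASLAuthenticationEnabled(true);
              XMPPConnection con = new XMPPConnection(config);
              con.connect();
              con.login("admin", "tigase");
              System.out.println("Logged in as " + con.getUser());
              con.disconnect();
          }
      }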

    Read the article

< Previous Page | 74 75 76 77 78 79 80 81 82 83 84 85  | Next Page >