Search Results

Search found 25852 results on 1035 pages for 'linq query syntax'.


  • Setting up a DNS name server for a mass virtual host with Bind9

    - by Dez
    I am trying to set up a chrooted DNS name server in a local LAN so that everyone connected to the LAN can access the mass virtual hosts defined for a development environment without having to manually edit their local /etc/hosts one by one. The mass virtual host is named example.user.dev (VirtualDocumentRoot /home/user/example ) and example.test (DocumentRoot /var/www/example). I set up everything and the /var/log/syslog doesn't show any error, but when checking the DNS with: host -v example.test it doesn't find the host. Using the dig command I don't receive an answer either. dig -x example.test ; <<>> DiG 9.5.1-P3 <<>> -x imprimere ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 47844 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0 ;; QUESTION SECTION: ;imprimere.in-addr.arpa. IN PTR ;; AUTHORITY SECTION: in-addr.arpa. 600 IN SOA a.root-servers.net. dns-ops.arin.net. 2010042604 1800 900 691200 10800 ;; Query time: 108 msec ;; SERVER: 80.58.0.33#53(80.58.0.33) ;; WHEN: Mon Apr 26 11:15:53 2010 ;; MSG SIZE rcvd: 107 My configuration is the following: /etc/bind/named.conf.local zone "example.test" { type master; allow-query { any; }; file "/etc/bind/zones/master_example.test"; notify yes; }; zone "1.168.192.in-addr.arpa" { type master; allow-query { any; }; file "/etc/bind/zones/master_1.168.192.in-addr.arpa"; notify yes; }; /etc/bind/named.conf.options Note: We have a static IP address, so I forward the queries to the DNS server at said IP address. options{ directory "/var/cache/bind"; forwarders { 80.34.100.160; }; auth-nxdomain no; listen-on-v6 { any; }; }; /etc/bind/zones/master_example.test $ORIGIN example.test. $TTL 86400 @ IN SOA example.test. root.example.test. ( 201004227 ; serial 28800 ; refresh 14400 ; retry 3600000 ; expire 86400 ) ; min ; TXT "example.test, DNS service" @ IN NS example.test. localhost A 127.0.0.1 example.test. A 192.168.1.52 example A 192.168.1.52 www CNAME example.test. /etc/hosts 127.0.0.1 localhost example 192.168.1.52 localhost example example.test /etc/resolv.conf Note: For BIND I just added the last 3 lines. nameserver 80.58.0.33 nameserver 80.58.61.250 nameserver 80.58.61.254 search example.test search example nameserver 192.168.1.52

    Read the article

  • DNS server BIND is not working

    - by milad
    I just installed BIND on RHEL 6 and pointed a domain to that server, but when I ping the domain it returns error 1214. Here is my named.conf: // // named.conf // // Provided by Red Hat bind package to configure the ISC BIND named(8) DNS // server as a caching only nameserver (as a localhost DNS resolver only). // // See /usr/share/doc/bind*/sample/ for example named configuration files. // options { listen-on port 53 { any; }; listen-on-v6 port 53 { ::1; }; directory "/var/named"; dump-file "/var/named/data/cache_dump.db"; statistics-file "/var/named/data/named_stats.txt"; memstatistics-file "/var/named/data/named_mem_stats.txt"; allow-query { any; }; recursion yes; dnssec-enable yes; dnssec-validation yes; dnssec-lookaside auto; /* Path to ISC DLV key */ bindkeys-file "/etc/named.iscdlv.key"; managed-keys-directory "/var/named/dynamic"; }; logging { channel default_debug { file "data/named.run"; severity dynamic; }; }; zone "." IN { type hint; file "named.ca"; }; include "/etc/named.rfc1912.zones"; include "/etc/named.root.key"; zone "mydomain.com"{ type master; file "/var/named/data/named.mydomain.com"; allow-update { none; }; }; And the content of "/var/named/data/named.mydomain.com": $TTL 38400 mydomain.com. IN SOA ns1.mydomain.com. milad.yahoo.com. ( 2012101201 ; serial number YYMMDDNN 28800 ; Refresh 7200 ; Retry 864000 ; Expire 38400 ; Min TTL ) mydomain.com. IN A 1.2.3.4 www IN A 1.2.3.4 ns1.mydomain.com. IN A 1.2.3.4 ns2.mydomain.com. IN A 1.2.3.4 mydomain.com. IN NS ns1.mydomain.com. mydomain.com. IN NS ns2.mydomain.com. And I'm sure the named service is running: [root@server ~]# service named status version: 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.3 CPUs found: 8 worker threads: 8 number of zones: 20 debug level: 0 xfers running: 0 xfers deferred: 0 soa queries in progress: 0 query logging is OFF recursive clients: 0/0/1000 tcp clients: 0/100 server is up and running named (pid 26299) is running... Thanks for your answers. I know that ping is not the job of BIND; I use it just to check whether the domain is pointed to the host or not (ping is open on my server, as I got a reply when pinging the IP). I use network-tools.com to ping the domain. Here is the output of the dig utility: dig mydomain.com ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.3 <<>> mydomain.com ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 6806 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;mydomain.com. IN A ;; Query time: 321 msec ;; SERVER: 5.6.7.8#53(5.6.7.8)##note that 5.6.7.8 is my idc dns ip ;; WHEN: Sun Oct 14 23:53:47 2012

    Read the article

  • MySQL Optimization Suggestions

    - by Brian Schroeter
    I'm trying to optimize our mySQL configuration for our large Magento website. The reason I believe that mySQL needs to be configured further is because New Relic has shown that our SELECT queries are taking a long time (20,000+ ms) in some categories. I ran MySQLTuner 1.3.0 and got the following results... (Disclaimer: I restarted mySQL earlier after tweaking some settings, and so the results here may not be 100% accurate): >> MySQLTuner 1.3.0 - Major Hayden <[email protected]> >> Bug reports, feature requests, and downloads at http://mysqltuner.com/ >> Run with '--help' for additional options and output filtering [OK] Currently running supported MySQL version 5.5.37-35.0 [OK] Operating on 64-bit architecture -------- Storage Engine Statistics ------------------------------------------- [--] Status: +ARCHIVE +BLACKHOLE +CSV -FEDERATED +InnoDB +MRG_MYISAM [--] Data in MyISAM tables: 7G (Tables: 332) [--] Data in InnoDB tables: 213G (Tables: 8714) [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17) [--] Data in MEMORY tables: 0B (Tables: 353) [!!] Total fragmented tables: 5492 -------- Security Recommendations ------------------------------------------- [!!] User '@host5.server1.autopartsnetwork.com' has no password set. [!!] User '@localhost' has no password set. [!!] User 'root@%' has no password set. -------- Performance Metrics ------------------------------------------------- [--] Up for: 5h 3m 4s (5M q [317.443 qps], 42K conn, TX: 18B, RX: 2B) [--] Reads / Writes: 95% / 5% [--] Total buffers: 35.5G global + 184.5M per thread (1024 max threads) [!!] Maximum possible memory usage: 220.0G (174% of installed RAM) [OK] Slow queries: 0% (6K/5M) [OK] Highest usage of available connections: 5% (61/1024) [OK] Key buffer size / total MyISAM indexes: 512.0M/3.1G [OK] Key buffer hit rate: 100.0% (102M cached / 45K reads) [OK] Query cache efficiency: 66.9% (3M cached / 5M selects) [!!] Query cache prunes per day: 3486361 [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 812K sorts) [!!] Joins performed without indexes: 1328 [OK] Temporary tables created on disk: 11% (126K on disk / 1M total) [OK] Thread cache hit rate: 99% (61 created / 42K connections) [!!] Table cache hit rate: 19% (9K open / 49K opened) [OK] Open file limit used: 2% (712/25K) [OK] Table locks acquired immediately: 100% (5M immediate / 5M locks) [!!] InnoDB buffer pool / data size: 32.0G/213.4G [OK] InnoDB log waits: 0 -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance MySQL started within last 24 hours - recommendations may be inaccurate Reduce your overall MySQL memory footprint for system stability Enable the slow query log to troubleshoot bad queries Increasing the query_cache size over 128M may reduce performance Adjust your join queries to always utilize indexes Increase table_cache gradually to avoid file descriptor limits Read this before increasing table_cache over 64: http://bit.ly/1mi7c4C Variables to adjust: *** MySQL's maximum memory usage is dangerously high *** *** Add RAM before increasing MySQL buffer variables *** query_cache_size (> 512M) [see warning above] join_buffer_size (> 128.0M, or always use indexes with joins) table_cache (> 12288) innodb_buffer_pool_size (>= 213G) My my.cnf configuration is as follows... 
[client] port = 3306 [mysqld_safe] nice = 0 [mysqld] tmpdir = /var/lib/mysql/tmp user = mysql port = 3306 skip-external-locking character-set-server = utf8 collation-server = utf8_general_ci event_scheduler = 0 key_buffer = 512M max_allowed_packet = 64M thread_stack = 512K thread_cache_size = 512 sort_buffer_size = 24M read_buffer_size = 8M read_rnd_buffer_size = 24M join_buffer_size = 128M # for some nightly processes client sessions set the join buffer to 8 GB auto-increment-increment = 1 auto-increment-offset = 1 myisam-recover = BACKUP max_connections = 1024 # max connect errors artificially high to support behaviors of NetScaler monitors max_connect_errors = 999999 concurrent_insert = 2 connect_timeout = 5 wait_timeout = 180 net_read_timeout = 120 net_write_timeout = 120 back_log = 128 # this table_open_cache might be too low because of MySQL bugs #16244691 and #65384) table_open_cache = 12288 tmp_table_size = 512M max_heap_table_size = 512M bulk_insert_buffer_size = 512M open-files-limit = 8192 open-files = 1024 query_cache_type = 1 # large query limit supports SOAP and REST API integrations query_cache_limit = 4M # larger than 512 MB query cache size is problematic; this is typically ~60% full query_cache_size = 512M # set to true on read slaves read_only = false slow_query_log_file = /var/log/mysql/slow.log slow_query_log = 0 long_query_time = 0.2 expire_logs_days = 10 max_binlog_size = 1024M binlog_cache_size = 32K sync_binlog = 0 # SSD RAID10 technically has a write capacity of 10000 IOPS innodb_io_capacity = 400 innodb_file_per_table innodb_table_locks = true innodb_lock_wait_timeout = 30 # These servers have 80 CPU threads; match 1:1 innodb_thread_concurrency = 48 innodb_commit_concurrency = 2 innodb_support_xa = true innodb_buffer_pool_size = 32G innodb_file_per_table innodb_flush_log_at_trx_commit = 1 innodb_log_buffer_size = 2G skip-federated [mysqldump] quick quote-names single-transaction max_allowed_packet = 64M I have a monster of a server here to power our site because our catalog is very large (300,000 simple SKUs), and I'm just wondering if I'm missing anything that I can configure further. :-) Thanks!

    Read the article

  • NDepend tool – Why every developer working with Visual Studio.NET must try it!

    - by hajan
    In the past two months, I have had a chance to test the capabilities and features of the amazing NDepend tool, designed to help you make your .NET code better and more beautiful, and to achieve high code quality. In other words, this tool will definitely help you harmonize your code. I mean, you’ve probably heard about Chaos Theory. Experienced developers and architects know the programming chaos that happens when working with a complex project architecture and its matrix of relationships between objects: even if you are the one who wrote all that code, you know how hard it is to visualize everything the code does. As the application gets more and more complex, you will start missing a lot of details in your code… NDepend will help you visualize all the details in a clever way that will help you make smart moves to make your code better. The NDepend tool supports many features, such as: Code Query Language – which will help you write custom rules and query your own code! Imagine you want to find all your methods which have more than 100 lines of code :)! That’s simple! However, I will dig much deeper in one of my next blogs, which I’m going to dedicate to NDepend’s CQL (Code Query Language) Architecture Visualization – You are an architect and want to visualize your application’s architecture? I wonder how many architects will be really surprised by their architectures, since NDepend shows your whole architecture, each piece of it. NDepend will show you how your code is structured. It shows the architecture in graphs, but if you have a very complex architecture, you can see it in the Dependency Matrix, which is better suited to displaying a large architecture Code Metrics – Using NDepend’s panel, you can see the code base according to Code Metrics. You can do some additional filtering, like selecting the top code elements ordered by their current code metric value. You can use the CQL language for this purpose too. Smart Search – NDepend has great searching ability, which is again based on the CQL (Code Query Language). However, you have some options to search using dropdown lists and text boxes, and it will generate the appropriate CQL code on the fly. Moreover, you can modify the CQL code if you want it to fit some more advanced searching tasks. Compare Builds and Code Difference – NDepend will also help you compare previous versions of your code with the current one, in one of the cleverest ways I’ve seen so far. Create Custom Rules – using CQL you can create custom rules and let NDepend warn you on each build if you break a rule Reporting – NDepend can automatically generate reports with detailed stats, graph representations, dependency matrices and some additional advanced reporting features that will simply explain to you everything related to your application’s code, architecture and what you’ve done. And that’s not all. As I’ve seen, there are many other features that NDepend supports. I will dig more in the upcoming days and will blog more about it. The team that built NDepend has also created good documentation, which you can find on the NDepend website. On their website, you can also find some good videos that will help you get started quite fast. It’s easy to install and, what is very important, it is fully integrated with Visual Studio. To get you started, you can watch the following Getting Started Online Demo and Tutorial with explanations and screenshots.
    If you are interested in knowing more about how to use the features of this tool, either visit their website or wait for my next blogs, where I will show some real examples of using the tool and how it helps make your code better. And the last thing for this blog: I would like to copy one sentence from NDepend’s home page, which says: ‘Hence the software design becomes concrete, code reviews are effective, large refactoring are easy and evolution is mastered.’ Website: www.ndepend.com Getting Started: http://www.ndepend.com/GettingStarted.aspx Features: http://www.ndepend.com/Features.aspx Download: http://www.ndepend.com/NDependDownload.aspx Hope you like it! Please do let me know your feedback by providing comments on my blog post. Kind Regards, Hajan

    Read the article

  • SQL Server IO handling mechanism can be severely affected by high CPU usage

    - by sqlworkshops
    Are you using an SSD or a SAN / NAS based storage solution and sporadically observe SQL Server experiencing high IO wait times, or does your DAS / HDD become very slow from time to time according to SQL Server statistics? Read on… I need your help to up vote my connect item – https://connect.microsoft.com/SQLServer/feedback/details/744650/sql-server-io-handling-mechanism-can-be-severely-affected-by-high-cpu-usage. Instead of taking a few seconds, queries could take minutes or hours to complete when the CPU is busy. In SQL Server, when a query / request needs to read data that is not in the data cache, or when the request has to write to disk, like transaction log records, the request / task will queue up the IO operation and wait for it to complete (task in suspended state; this wait time is the resource wait time). When the IO operation is complete, the task will be queued to run on the CPU. If the CPU is busy executing other tasks, this task will wait (task in runnable state) until other tasks in the queue either complete, get suspended due to waits, or exhaust their quantum of 4ms (this is the signal wait time, which along with resource wait time will increase the overall wait time). When the CPU becomes free, the task will finally be run on the CPU (task in running state). The signal wait time can be up to 4ms per runnable task; this is by design. So if a CPU has 5 runnable tasks in the queue, then this query, after the resource becomes available, might wait up to a maximum of 5 x 4ms = 20ms in the runnable state (normally less, as other tasks might not use the full quantum). In case the CPU usage is high, let’s say many CPU intensive queries are running on the instance, there is a possibility that the IO operations that are completed at the hardware and operating system level are not yet processed by SQL Server, keeping the task in the resource wait state for longer than necessary. In the case of an SSD, the IO operation might even complete in less than a millisecond, but it might take SQL Server 100s of milliseconds, for instance, to process the completed IO operation. For example, let’s say you have a user inserting 500 rows in individual transactions. When the transaction log is on an SSD or a battery backed up controller that has write cache enabled, all of these inserts will complete in 100 to 200ms. With a CPU intensive parallel query executing across all CPU cores, the same inserts might take minutes to complete. WRITELOG wait time will be very high in this case (both under sys.dm_io_virtual_file_stats and sys.dm_os_wait_stats). In addition you will notice a large number of WRITELOG waits since log records are written by the LOG WRITER, and hence very high signal_wait_time_ms leading to more query delays. However, the Performance Monitor counter PhysicalDisk, Avg. Disk sec/Write will report very low latency times. Such delayed IO handling also occurs for read operations, with artificially very high PAGEIOLATCH_SH wait time (with the number of PAGEIOLATCH_SH waits remaining the same). This problem will manifest more and more as customers start using SSD based storage for SQL Server, since they drive the CPU usage to the limits with faster IOs. We have a few workarounds for specific scenarios, but we think Microsoft should resolve this issue at the product level.
    We have a connect item open – https://connect.microsoft.com/SQLServer/feedback/details/744650/sql-server-io-handling-mechanism-can-be-severely-affected-by-high-cpu-usage – (with example scripts) to reproduce this behavior; please up vote the item so the issue will be addressed by the SQL Server product team soon. Thanks for your help and best regards, Ramesh Meyyappan. Home: www.sqlworkshops.com LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article

  • Advanced Record-Level Business Intelligence with Inner Queries

    - by gt0084e1
    While business intelligence is generally applied at an aggregate level to large data sets, it's often useful to provide a more streamlined insight into individual records or to be able to sort and rank them. For instance, a salesperson looking at a specific customer could benefit from basic stats on that account. A marketer trying to define an ideal customer could pull the top entries and look for insights or patterns. Inner queries let you do sophisticated analysis without the overhead of traditional BI or OLAP technologies like Analysis Services. Example - Order History Constancy Let's assume that management has realized that the best thing for our business is to have customers ordering every month. We'll need to identify and rank customers based on how consistently they buy and when their last purchase was, so sales & marketing can respond accordingly. Our current application may not be able to provide this, and adding an OLAP server like SSAS may be overkill for our needs. Luckily, SQL Server provides the ability to do relatively sophisticated analytics via inner queries. Here's the kind of output we'd like to see. Creating the Queries Before you create a view, you need to create the SQL query that does the calculations. Here we are calculating the total number of orders as well as the number of months since the last order. These fields might be very useful to sort by but may not be available in the app. This approach provides a very streamlined and high performance method of delivering actionable information without radically changing the application. It also works very well with self-service reporting tools like Izenda. SELECT CustomerID,CompanyName, ( SELECT COUNT(OrderID) FROM Orders WHERE Orders.CustomerID = Customers.CustomerID ) As Orders, DATEDIFF(mm, ( SELECT Max(OrderDate) FROM Orders WHERE Orders.CustomerID = Customers.CustomerID) ,getdate() ) AS MonthsSinceLastOrder FROM Customers Creating Views To turn this or any query into a view, just put CREATE VIEW AS before it. If you want to change it, use the statement ALTER VIEW AS. Creating Computed Columns If you'd prefer not to create a view, inner queries can also be applied by using computed columns. Place your SQL in the (Formula) field of the Computed Column Specification, or check out this article here. Advanced Scoring and Ranking One of the best uses for this approach is to score leads based on multiple fields. For instance, you may be in a business where customers that don't order every month require more persistent follow up. You could devise a simple formula that shows the continuity of an account. If they ordered every month since their first order, they would be at 100, indicating that they have been ordering 100% of the time. Here's the query that would calculate that. It uses a few SQL tricks to make this happen. We are extracting the count of unique months and then dividing by the months since the initial order. This query will give you the following information, which can be used to help sales and marketing know where to focus: the number of orders, the first order date, the last order date, and the percentage of months in which an order was placed since the first order. You could sort by this percentage to know where to start calling or to find patterns describing your best customers.
SELECT CustomerID, (SELECT COUNT(OrderID) FROM Orders WHERE Orders.CustomerID = Customers.CustomerID) As Orders, (SELECT Max(OrderDate) FROM Orders WHERE Orders.CustomerID = Customers.CustomerID) AS LastOrder, (SELECT Min(OrderDate) FROM Orders WHERE Orders.CustomerID = Customers.CustomerID) AS FirstOrder, DATEDIFF(mm,(SELECT Min(OrderDate) FROM Orders WHERE Orders.CustomerID = Customers.CustomerID),getdate()) AS MonthsSinceFirstOrder, 100*(SELECT COUNT(DISTINCT 100*DATEPART(yy,OrderDate) + DATEPART(mm,OrderDate)) FROM Orders WHERE Orders.CustomerID = Customers.CustomerID) / DATEDIFF(mm,(SELECT Min(OrderDate) FROM Orders WHERE Orders.CustomerID = Customers.CustomerID),getdate()) As OrderPercent FROM Customers

    Read the article

  • SQL Server Optimizer Malfunction?

    - by Tony Davis
    There was a sharp intake of breath from the audience when Adam Machanic declared the SQL Server optimizer to be essentially "stuck in 1997". It was during his fascinating "Query Tuning Mastery: Manhandling Parallelism" session at the recent PASS SQL Summit. Paraphrasing somewhat, Adam (blog | @AdamMachanic) offered a convincing argument that the optimizer often delivers flawed plans based on assumptions that are no longer valid with today’s hardware. In 1997, when Microsoft engineers re-designed the database engine for SQL Server 7.0, SQL Server got its initial implementation of a cost-based optimizer. Up to SQL Server 2000, the developer often had to deploy a steady stream of hints in SQL statements to combat the occasionally wilful plan choices made by the optimizer. However, with each successive release, the optimizer has evolved and improved in its decision-making. It is still prone to the occasional stumble when we tackle difficult problems, join large numbers of tables, perform complex aggregations, and so on, but for most of us, most of the time, the optimizer purrs along efficiently in the background. Adam, however, challenged further any assumption that the current optimizer is competent at providing the most efficient plans for our more complex analytical queries, and in particular of offering up correctly parallelized plans. He painted a picture of a present where complex analytical queries have become ever more prevalent; where disk IO is ever faster so that reads from disk come into buffer cache faster than ever; where the improving RAM-to-data ratio means that we have a better chance of finding our data in cache. Most importantly, we have more CPUs at our disposal than ever before. To get these queries to perform, we not only need to have the right indexes, but also to be able to split the data up into subsets and spread its processing evenly across all these available CPUs. Improvements such as support for ColumnStore indexes are taking things in the right direction, but, unfortunately, deficiencies in the current Optimizer mean that SQL Server is yet to be able to exploit properly all those extra CPUs. Adam’s contention was that the current optimizer uses essentially the same costing model for many of its core operations as it did back in the days of SQL Server 7, based on assumptions that are no longer valid. One example he gave was a "slow disk" bias that may have been valid back in 1997 but certainly is not on modern disk systems. Essentially, the optimizer assesses the relative cost of serial versus parallel plans based on the assumption that there is no IO cost benefit from parallelization, only CPU. It assumes that a single request will saturate the IO channel, and so a query would not run any faster if we parallelized IO because the disk system simply wouldn’t be able to handle the extra pressure. As such, the optimizer often decides that a serial plan is lower cost, often in cases where a parallel plan would improve performance dramatically. It was challenging and thought provoking stuff, as were his techniques for driving parallelism through query logic based on subsets of rows that define the "grain" of the query. I highly recommend you catch the session if you missed it. I’m interested to hear though, when and how often people feel the force of the optimizer’s shortcomings. Barring mistakes, such as stale statistics, how often do you feel the Optimizer fails to find the plan you think it should, and what are the most common causes? 
Is it fighting to induce it toward parallelism? Combating unexpected plans, arising from table partitioning? Something altogether more prosaic? Cheers, Tony.

    Read the article

  • Consumer Oriented Search In Oracle Endeca Information Discovery - Part 2

    - by Bob Zurek
    As discussed in my last blog posting on this topic, Information Discovery, a core capability of the Oracle Endeca Information Discovery solution, enables businesses to search, discover and navigate through a wide variety of big data including structured, unstructured and semi-structured data. With search as a core advanced capability of our product, it is important to understand some of the key differences and capabilities in the underlying data store of Oracle Endeca Information Discovery, and that is our Endeca Server. In the last post on this subject, we talked about Exploratory Search capabilities along with support for cascading relevance. Additional search capabilities in the Endeca Server, which differentiate it from simple keyword based "search boxes" in other Information Discovery products, also include: The Endeca Server Supports Set Search. The Endeca Server is organized around set retrieval, which means that it looks at groups of results (all the documents that match a search), as well as the relationship of each individual result to the set. Other approaches only compute the relevance of a document by comparing the document to the search query – not by comparing the document to all the others. For example, a search for “U.S.” in another approach might match to the title of a document and get a high ranking. But what if it were a collection of government documents in which “U.S.” appeared in many titles, making that clue less meaningful? A set analysis would reveal this and be used to adjust relevance accordingly. The Endeca Server Supports Second-Order Relevance. Unlike simple search interfaces in traditional BI tools, which provide limited relevance ranking, such as a list of results based on keyword matching, Endeca enables users to determine the most salient terms to divide up the results. Determining this second-order relevance is the key to providing effective guidance. Support for Queries and Filters. Search is the most common query type, but hardly the only one, and users need to express a wide range of queries. Oracle Endeca Information Discovery also includes navigation, interactive visualizations, analytics, range filters, geospatial filters, and other query types that are more commonly associated with BI tools. Unlike other approaches, these queries operate across structured, semi-structured and unstructured content stored in the Endeca Server. Furthermore, this set is easily extensible because the core engine allows for pluggable features to be added. Like a search engine, queries are answered with a results list, ranked to put the most likely matches first. Unlike “black box” relevance solutions, which generalize one strategy for everyone, we believe that optimal relevance strategies vary across domains. Therefore, the product provides line-of-business owners with a set of relevance modules that let them tune the best results based on their content. The Endeca Server query result sets are summarized, which gives users guidance on how to refine and explore further. Summaries include Guided Navigation® (a form of faceted search), maps, charts, graphs, tag clouds, concept clusters, and clarification dialogs. Users don’t explicitly ask for these summaries; Oracle Endeca Information Discovery analytic applications provide the right ones, based on configurable controls and rules. For example, the analytic application might guide a procurement agent filtering for in-stock parts by visualizing the results on a map and calculating their average fulfillment time.
    Furthermore, the user can interact with summaries and filters without resorting to writing complex SQL queries; the user can simply click to add filters. Within Oracle Endeca Information Discovery, all parts of the summaries are clickable and searchable. We are living in a search-driven society where business users really seem to enjoy entering information into a search box. We do this every day as consumers and, therefore, we have gotten used to looking for that box. However, the key to getting the right results is to guide that user in a way that provides additional discovery, beyond what they may have anticipated. This is why these advanced features of search inside the Endeca Server have been so important. They have helped to guide our great customers to success.

    Read the article

  • Rethinking Oracle Optimizer Statistics for P6 Part 2

    - by Brian Diehl
    In the previous post (Part 1), I tried to draw some key insights about the relationship between P6 and Oracle Optimizer Statistics. The first is that average cardinality has the greatest impact on query optimization and that the particular queries generated by P6 are more likely to use this average during calculations. The second is that these are statistics that are unlikely to change greatly over the life of the application. Ultimately, our goal is to get the best query optimization possible. Or is it? Stability No application administrator wants to get the call at 9am that their application users cannot get their work done because everything is running slow. This is a possibility with a regularly scheduled nightly collection of statistics. It may not just be slow performance, but a complete loss of service because one or more queries are optimized poorly. Ideally, this should not be the case. The database optimizer should make better decisions with more up-to-date data. Better statistics may give an incremental performance benefit. However, this benefit must be balanced against the potential cost of system down time. It is stability that we ultimately desire and not absolute optimal performance. We do want the benefit from more accurate statistics and better query plans, but not at the risk of an unusable system. As a result, I've developed the following methodology around managing database statistics for the P6 database. 1. No Automatic Re-Gathering - The daily, weekly, or other interval of statistic gathering is unlikely to be beneficial. Quite the opposite. It is more likely to cause problems. 2. Smart Re-Gathering - The time to collect statistics is when things have changed significantly. For a new installation of P6, this is happening more often because the data is growing from a few rows to thousands and more. But for a mature system, the data is not changing significantly from week to week. There are times to collect statistics: New releases of the application Changes in the underlying hardware or software versions (ex. new Oracle RDBMS version) When additional user groups are added. The new groups may use the software in significantly different ways. After significant changes in the data. This may be monthly, quarterly or yearly. 3. Always Test - If you take away one thing from this post, it would be to always have a plan to test after changing statistics. In reality, statistics can be collected as often as you desire provided there are tests in place to verify that performance is the same or better. These might be automated tests or simply a manual script of application functions. 4. Have a Way Out - Never change the statistics without a way to return to the previous set. Think of the statistics as one part of the overall application code that also includes the source code--both application and RDBMS. It would be foolish to change to the new code without a way to get back to the previous version. In the final post, I will talk about the actual script I created for P6 PMDB and possible future direction for managing query performance.

    Read the article

  • sp_send_dbmail attach files stored as varbinary in database

    - by Mindstorm Interactive
    I have a two part question relating to sending query results as attachments using sp_send_dbmail. Problem 1: Only basic .txt files will open. Any other format like .pdf or .jpg are corrupted. Problem 2: When attempting to send multiple attachments, I receive one file with all file names glued together. I'm running SQL Server 2005 and I have a table storing uploaded documents: CREATE TABLE [dbo].[EmailAttachment]( [EmailAttachmentID] [int] IDENTITY(1,1) NOT NULL, [MassEmailID] [int] NULL, -- foreign key [FileData] [varbinary](max) NOT NULL, [FileName] [varchar](100) NOT NULL, [MimeType] [varchar](100) NOT NULL I also have a MassEmail table with standard email stuff. Here is the SQL Send Mail script. For brevity, I've excluded declare statements. while ( (select count(MassEmailID) from MassEmail where status = 20 )>0) begin select @MassEmailID = Min(MassEmailID) from MassEmail where status = 20 select @Subject = [Subject] from MassEmail where MassEmailID = @MassEmailID select @Body = Body from MassEmail where MassEmailID = @MassEmailID set @query = 'set nocount on; select cast(FileData as varchar(max)) from Mydatabase.dbo.EmailAttachment where MassEmailID = '+ CAST(@MassEmailID as varchar(100)) select @filename = '' select @filename = COALESCE(@filename+ ',', '') +FileName from EmailAttachment where MassEmailID = @MassEmailID exec msdb.dbo.sp_send_dbmail @profile_name = 'MASS_EMAIL', @recipients = '[email protected]', @subject = @Subject, @body =@Body, @body_format ='HTML', @query = @query, @query_attachment_filename = @filename, @attach_query_result_as_file = 1, @query_result_separator = '; ', @query_no_truncate = 1, @query_result_header = 0; update MassEmailset status= 30,SendDate = GetDate() where MassEmailID = @MassEmailID end I am able to successfully read files from the database so I know the binary data is not corrupted. .txt files only read when I cast FilaData to varchar. But clearly original headers are lost. It's also worth noting that attachment file sizes are different than the original files. That is most likely due to improper encoding as well. So I'm hoping there's a way to create file headers using the stored mimetype, or some way to include file headers in the binary data? I'm also not confident in the values of the last few parameters, and I know coalesce is not quite right, because it prepends the first file name with a comma. But good documentation is nearly impossible to find. Please help!
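
    One workaround, if the mail can be sent from application code instead of from msdb, is to stream each varbinary row into its own System.Net.Mail attachment so the bytes are never converted to text. The sketch below only illustrates that alternative approach, not a fix for sp_send_dbmail itself; the addresses, SMTP host and connection string are placeholders, while the table and column names come from the question.

      using System.Data;
      using System.Data.SqlClient;
      using System.IO;
      using System.Net.Mail;

      class AttachmentMailer
      {
          // Reads the stored files for one MassEmailID and attaches the raw bytes,
          // so no varchar conversion ever touches the binary content.
          public static void SendWithAttachments(string connectionString, int massEmailId)
          {
              var message = new MailMessage("noreply@example.com", "recipient@example.com")
              {
                  Subject = "Your documents",
                  Body = "<p>Please see the attached files.</p>",
                  IsBodyHtml = true
              };

              using (var connection = new SqlConnection(connectionString))
              using (var command = new SqlCommand(
                  "SELECT FileData, FileName, MimeType FROM EmailAttachment WHERE MassEmailID = @id",
                  connection))
              {
                  command.Parameters.Add("@id", SqlDbType.Int).Value = massEmailId;
                  connection.Open();
                  using (var reader = command.ExecuteReader())
                  {
                      while (reader.Read())
                      {
                          var bytes = (byte[])reader["FileData"];
                          message.Attachments.Add(new Attachment(
                              new MemoryStream(bytes),
                              (string)reader["FileName"],
                              (string)reader["MimeType"]));
                      }
                  }
              }

              using (var client = new SmtpClient("smtp.example.com"))
              {
                  client.Send(message);
              }
          }
      }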

    Read the article

  • Why do I get error "1337 The security ID structure is invalid" when using subinacl?

    - by ralbatross
    I have a standard Win 7 account 'popuser' to which I'd like to grant start and stop permissions for the OpenVPNService. I've used the following command successfully on other machines, but for some reason on a new Acer Aspire 5830T that I'm setting up this doesn't do the trick for me: subinacl /service OpenVPNService /grant=popuser=TO I keep getting the following error message: LookupAccountName : OpenVPNService:popuser 1337 The security ID structure is invalid. Current object OpenVPNService will not be processed Elapsed Time: 00 00:00:00 Done: 0, Modified 0, Failed 0, Syntax errors 1 Last Syntax Error:WARNING : /grant=popuser=to : Error when checking arguments - OpenVPNService I've tried adding the machine name to the username and the service name to no avail. I'm running command prompt as an administrator. Anyone have any ideas what's going on?

    Read the article

  • How to get foreignSecurityPrincipal from a group using DirectorySearcher

    - by kain64b
    What I tested with 0 results: string queryForeignSecurityPrincipal = "(&(objectClass=foreignSecurityPrincipal)(memberof:1.2.840.113556.1.4.1941:={0})(uSNChanged>={1})(uSNChanged<={2}))"; sidsForeign = GetUsersSidsByQuery(groupName, string.Format(queryForeignSecurityPrincipal, groupPrincipal.DistinguishedName, 0, 0)); public IList<SecurityIdentifier> GetUsersSidsByQuery(string groupName, string query) { List<SecurityIdentifier> results = new List<SecurityIdentifier>(); try{ using (var context = new PrincipalContext(ContextType.Domain, DomainName, User, Password)) { using (var groupPrincipal = GroupPrincipal.FindByIdentity(context, IdentityType.SamAccountName, groupName)) { DirectoryEntry directoryEntry = (DirectoryEntry)groupPrincipal.GetUnderlyingObject(); do { directoryEntry = directoryEntry.Parent; } while (directoryEntry.SchemaClassName != "domainDNS"); DirectorySearcher searcher = new DirectorySearcher(directoryEntry){ SearchScope=System.DirectoryServices.SearchScope.Subtree, Filter=query, PageSize=10000, SizeLimit = 15000 }; searcher.PropertiesToLoad.Add("objectSid"); searcher.PropertiesToLoad.Add("distinguishedname"); using (SearchResultCollection result = searcher.FindAll()) { foreach (var obj in result) { if (obj != null) { var valueProp = ((SearchResult)obj).Properties["objectSid"]; foreach (var atributeValue in valueProp) { SecurityIdentifier value = (new SecurityIdentifier((byte[])atributeValue, 0)); results.Add(value); } } } } } } } catch (Exception e) { WriteSystemError(e); } return results; } I tested it on regular users with the query "(&(objectClass=user)(memberof:1.2.840.113556.1.4.1941:={0})(uSNChanged>={1})(uSNChanged<={2}))" and it works. I also tested with objectClass=* ... nothing helps... But if I call groupPrincipal.GetMembers, I get all foreign user accounts from the group. BUT groupPrincipal.GetMembers HAS A MEMORY LEAK. Any idea how to fix my query?
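
    A note on an alternative that avoids both the LDAP filter and GroupPrincipal.GetMembers: foreign security principals live under CN=ForeignSecurityPrincipals, and the group's own member attribute already holds their distinguished names, so you can walk that attribute and keep only the entries whose objectClass is foreignSecurityPrincipal. The rough sketch below only covers direct members (no nested groups, unlike the 1.2.840.113556.1.4.1941 matching rule) and assumes the group DN is already known.

      using System;
      using System.Collections.Generic;
      using System.DirectoryServices;
      using System.Security.Principal;

      class ForeignMemberReader
      {
          // Walks the group's "member" DNs and returns the SIDs of the
          // foreignSecurityPrincipal entries among them (direct members only).
          public static IList<SecurityIdentifier> GetForeignMemberSids(string groupDn)
          {
              var sids = new List<SecurityIdentifier>();
              using (var group = new DirectoryEntry("LDAP://" + groupDn))
              {
                  foreach (object memberDn in group.Properties["member"])
                  {
                      using (var member = new DirectoryEntry("LDAP://" + memberDn))
                      {
                          // Users from trusted external domains carry this object class.
                          if (member.Properties["objectClass"].Contains("foreignSecurityPrincipal"))
                          {
                              var sidBytes = (byte[])member.Properties["objectSid"].Value;
                              sids.Add(new SecurityIdentifier(sidBytes, 0));
                          }
                      }
                  }
              }
              return sids;
          }
      }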

    Read the article

  • PHP 5.3.1 Undefined Symbol: OnUpdateLong error on Apache Startup

    - by docgnome
    I'm running Ubuntu 8.04 on this server. I had PHP 5.2 installed via the package manager. I removed it to install PHP 5.3.1 by hand. I built the packages like so ./configure --prefix=/opt/php --with-mysql --with-curl=/usr/bin --with-apxs2=/usr/bin/apxs2 make make install This installed PHP 5.3.1 in /opt/php/ $ php -v PHP 5.3.1 (cli) (built: Dec 7 2009 10:51:14) Copyright (c) 1997-2009 The PHP Group Zend Engine v2.3.0, Copyright (c) 1998-2009 Zend Technologies However, when I try to start Apache I get this. # /etc/init.d/apache2 restart * Restarting web server apache2 apache2: Syntax error on line 185 of /etc/apache2/apache2.conf: Syntax error on line 1 of /etc/apache2/mods-enabled/php5.load: Cannot load /usr/lib/apache2/modules/libphp5.so into server: /usr/lib/apache2/modules/libphp5.so: undefined symbol: OnUpdateLong [fail] Any ideas what's causing this error? All the references I can see have to do with building php5 packages for php4 or the like. PHP4 has never been installed on this machine.

    Read the article

  • C# WebRequest to connect to the Wikipedia API

    - by NickJ
    Hey, This may be a pathetically simple problem but I cannot seem to format the post webrequest/response to get data from the wikipedia api. I have posted my code below if anyone can help me see my problem. string pgTitle = txtPageTitle.Text; Uri address = new Uri("http://en.wikipedia.org/w/api.php"); HttpWebRequest request = WebRequest.Create(address) as HttpWebRequest; request.Method = "POST"; request.ContentType = "application/x-www-form-urlencoded"; string action = "query"; string query = pgTitle; StringBuilder data = new StringBuilder(); data.Append("action=" + HttpUtility.UrlEncode(action)); data.Append("&query=" + HttpUtility.UrlEncode(query)); byte[] byteData = UTF8Encoding.UTF8.GetBytes(data.ToString()); request.ContentLength = byteData.Length; using (Stream postStream = request.GetRequestStream()) { postStream.Write(byteData, 0, byteData.Length); } using (HttpWebResponse response = request.GetResponse() as HttpWebResponse) { // Get the response stream StreamReader reader = new StreamReader(response.GetResponseStream()); divWikiData.InnerText = reader.ReadToEnd(); }
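
    One thing worth noting here: the MediaWiki API does not understand a bare query= field; it expects its own parameter names (action=query, titles=, prop=, format=), and read-only calls can be plain GET requests. Below is a minimal, untested sketch along those lines; the prop and format values are just one combination the API accepts, and the User-Agent string is a placeholder.

      using System.IO;
      using System.Net;
      using System.Web;

      class WikipediaQuery
      {
          // Fetches basic page information for a title from the MediaWiki API as XML.
          public static string GetPageInfo(string pageTitle)
          {
              string url = "http://en.wikipedia.org/w/api.php"
                  + "?action=query"
                  + "&titles=" + HttpUtility.UrlEncode(pageTitle)
                  + "&prop=info"
                  + "&format=xml";

              HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
              request.Method = "GET";
              // Wikipedia asks API clients to send a descriptive User-Agent.
              request.UserAgent = "ExampleClient/1.0 (contact@example.com)";

              using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
              using (StreamReader reader = new StreamReader(response.GetResponseStream()))
              {
                  return reader.ReadToEnd();
              }
          }
      }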

    Read the article

  • Ext.data.Store, JavaScript Arrays and Ext.grid.ColumnModel

    - by Michael Wales
    I am using Ext.data.Store to call a PHP script which returns a JSON response with some metadata about fields that will be used in a query (unique name, table, field, and user-friendly title). I then loop through each of the Ext.data.Record objects, placing the data I need into an array (this_column), push that array onto the end of another array (columns), and eventually pass this to an Ext.grid.ColumnModel object. The problem I am having is - no matter which query I am testing against (I have a number of them, varying in size and complexity), the columns array always works as expected up to columns[15]. At columns[16], all indexes from that point and previous are filled with the value of columns[15]. This behavior continues until the loop reaches the end of the Ext.data.Store object, when the entire arrays consists of the same value. Here's some code: columns = []; this_column = []; var MetaData = Ext.data.Record.create([ {name: 'id'}, {name: 'table'}, {name: 'field'}, {name: 'title'} ]); // Query the server for metadata for the query we're about to run metaDataStore = new Ext.data.Store({ autoLoad: true, reader: new Ext.data.JsonReader({ totalProperty: 'results', root: 'fields', id: 'id' }, MetaData), proxy: new Ext.data.HttpProxy({ url: 'index.php/' + type + '/' + slug }), listeners: { 'load': function () { metaDataStore.each(function(r) { this_column['id'] = r.data['id']; this_column['header'] = r.data['title']; this_column['sortable'] = true; this_column['dataIndex'] = r.data['table'] + '.' + r.data['field']; // This display valid information, through the entire process console.info(this_column['id'] + ' : ' + this_column['header'] + ' : ' + this_column['sortable'] + ' : ' + this_column['dataIndex']); columns.push(this_column); }); // This goes nuts at columns[15] console.info(columns); gridColModel = new Ext.grid.ColumnModel({ columns: columns });

    Read the article

  • Best practices for using the Entity Framework with WPF DataBinding

    - by Ken Smith
    I'm in the process of building my first real WPF application (i.e., the first intended to be used by someone besides me), and I'm still wrapping my head around the best way to do things in WPF. It's a fairly simple data access application using the still-fairly-new Entity Framework, but I haven't been able to find a lot of guidance online for the best way to use these two technologies (WPF and EF) together. So I thought I'd toss out how I'm approaching it, and see if anyone has any better suggestions. I'm using the Entity Framework with SQL Server 2008. The EF strikes me as both much more complicated than it needs to be, and not yet mature, but Linq-to-SQL is apparently dead, so I might as well use the technology that MS seems to be focusing on. This is a simple application, so I haven't (yet) seen fit to build a separate data layer around it. When I want to get at data, I use fairly simple Linq-to-Entity queries, usually straight from my code-behind, e.g.: var families = from family in entities.Family.Include("Person") orderby family.PrimaryLastName, family.Tag select family; Linq-to-Entity queries return an IOrderedQueryable result, which doesn't automatically reflect changes in the underlying data, e.g., if I add a new record via code to the entity data model, the existence of this new record is not automatically reflected in the various controls referencing the Linq query. Consequently, I'm throwing the results of these queries into an ObservableCollection, to capture underlying data changes: familyOC = new ObservableCollection<Family>(families.ToList()); I then map the ObservableCollection to a CollectionViewSource, so that I can get filtering, sorting, etc., without having to return to the database. familyCVS.Source = familyOC; familyCVS.View.Filter = new Predicate<object>(ApplyFamilyFilter); familyCVS.View.SortDescriptions.Add(new System.ComponentModel.SortDescription("PrimaryLastName", System.ComponentModel.ListSortDirection.Ascending)); familyCVS.View.SortDescriptions.Add(new System.ComponentModel.SortDescription("Tag", System.ComponentModel.ListSortDirection.Ascending)); I then bind the various controls and what-not to that CollectionViewSource: <ListBox DockPanel.Dock="Bottom" Margin="5,5,5,5" Name="familyList" ItemsSource="{Binding Source={StaticResource familyCVS}, Path=., Mode=TwoWay}" IsSynchronizedWithCurrentItem="True" ItemTemplate="{StaticResource familyTemplate}" SelectionChanged="familyList_SelectionChanged" /> When I need to add or delete records/objects, I manually do so from both the entity data model, and the ObservableCollection: private void DeletePerson(Person person) { entities.DeleteObject(person); entities.SaveChanges(); personOC.Remove(person); } I'm generally using StackPanel and DockPanel controls to position elements. Sometimes I'll use a Grid, but it seems hard to maintain: if you want to add a new row to the top of your grid, you have to touch every control directly hosted by the grid to tell it to use a new line. Uggh. (Microsoft has never really seemed to get the DRY concept.) I almost never use the VS WPF designer to add, modify or position controls. The WPF designer that comes with VS is sort of vaguely helpful to see what your form is going to look like, but even then, well, not really, especially if you're using data templates that aren't binding to data that's available at design time. If I need to edit my XAML, I take it like a man and do it manually. Most of my real code is in C# rather than XAML. 
As I've mentioned elsewhere, entirely aside from the fact that I'm not yet used to "thinking" in it, XAML strikes me as a clunky, ugly language, that also happens to come with poor designer and intellisense support, and that can't be debugged. Uggh. Consequently, whenever I can see clearly how to do something in C# code-behind that I can't easily see how to do in XAML, I do it in C#, with no apologies. There's been plenty written about how it's a good practice to almost never use code-behind in WPF page (say, for event-handling), but so far at least, that makes no sense to me whatsoever. Why should I do something in an ugly, clunky language with god-awful syntax, an astonishingly bad editor, and virtually no type safety, when I can use a nice, clean language like C# that has a world-class editor, near-perfect intellisense, and unparalleled type safety? So that's where I'm at. Any suggestions? Am I missing any big parts of this? Anything that I should really think about doing differently?

    Read the article

  • Problem with Remember Me Service in Spring Security

    - by Gearóid
    Hi, I'm trying to implement a "remember me" functionality in my website using Spring. The cookie and entry in the persistent_logins table are getting created correctly. Additionally, I can see that the correct user is being restored as the username is displayed at the top of the page. However, once I try to access any information for this user when they return after they were "remembered", I get a NullPointerException. It looks as though the user isn't being set in the session again. My applicationContext-security.xml contains the following: <remember-me data-source-ref="dataSource" user-service-ref="userService"/> ... <authentication-provider user-service-ref="userService" /> <jdbc-user-service id="userService" data-source-ref="dataSource" role-prefix="ROLE_" users-by-username-query="select email as username, password, 1 as ENABLED from user where email=?" authorities-by-username-query="select user.id as id, upper(role.name) as authority from user, role, users_roles where users_roles.user_fk=id and users_roles.role_fk=role.name and user.email=?"/> I thought it may have had something to do with users-by-username query but surely login wouldn't work correctly if this query was incorrect? Any help on this would be greatly appreciated. Thanks, gearoid.

    Read the article

  • ASP.NET Entity Framework ObjectContext error

    - by Chris Klepeis
    I'm building a 4 layered ASP.Net web application. The layers are: Data Layer Entity Layer Business Layer UI Layer The entity layer has my data model classes and is built from my entity data model (edmx file) in the datalayer using T4 templates (POCO). The entity layer is referenced in all other layers. My data layer has a class called SourceKeyRepository which has a function like so: public IEnumerable<SourceKey> Get(SourceKey sk) { using (dmc = new DataModelContainer()) { var query = from SourceKey in dmc.SourceKeys select SourceKey; if (sk.sourceKey1 != null) { query = from SourceKey in query where SourceKey.sourceKey1 == sk.sourceKey1 select SourceKey; } return query; } } Lazy loading is disabled since I do not want my queries to run in other layers of this application. I'm receiving the following error when attempting to access the information in the UI layer: The ObjectContext instance has been disposed and can no longer be used for operations that require a connection. I'm sure this is because my DataModelContainer "dmc" was disposed. How can I return this IEnumerable object from my data layer so that it does not rely on the ObjectContext, but solely on the DataModel? Is there a way to limit lazy loading to only occur in the data layer?
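
    One common way around the disposed-context error is to force the query to execute while the context is still open, for example by returning a materialized list rather than the deferred query. A rough sketch of that change to the repository method is below; it reuses the DataModelContainer and SourceKey names from the question, assumes System.Linq is imported, and is illustrative only.

      public IEnumerable<SourceKey> Get(SourceKey sk)
      {
          using (var dmc = new DataModelContainer())
          {
              IQueryable<SourceKey> query = dmc.SourceKeys;

              if (sk.sourceKey1 != null)
              {
                  query = query.Where(k => k.sourceKey1 == sk.sourceKey1);
              }

              // ToList() runs the SQL here, while dmc is still alive, so the caller
              // gets plain objects that no longer depend on the ObjectContext.
              return query.ToList();
          }
      }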

    Read the article

  • Why is OracleDataAdapter.Fill() Very Slow?

    - by John Gietzen
    I am using a pretty complex query to retrieve some data out of one of our billing databases. I'm running in to an issue where the query seems to complete fairly quickly when executed with SQL Developer, but does not seem to ever finish when using the OracleDataAdapter.Fill() method. I'm only trying to read about 1000 rows, and the query completes in SQL Developer in about 20 seconds. What could be causing such drastic differences in performance? I have tons of other queries that run quickly using the same function. Here is the code I'm using to execute the query: using Oracle.DataAccess.Client; ... public DataTable ExecuteExternalQuery(string connectionString, string providerName, string queryText) { DbConnection connection = null; DbCommand selectCommand = null; DbDataAdapter adapter = null; switch (providerName) { case "System.Data.OracleClient": case "Oracle.DataAccess.Client": connection = new OracleConnection(connectionString); selectCommand = connection.CreateCommand(); adapter = new OracleDataAdapter((OracleCommand)selectCommand); break; ... } DataTable table = null; try { connection.Open(); selectCommand.CommandText = queryText; selectCommand.CommandTimeout = 300000; selectCommand.CommandType = CommandType.Text; table = new DataTable("result"); table.Locale = CultureInfo.CurrentCulture; adapter.Fill(table); } finally { adapter.Dispose(); if (connection.State != ConnectionState.Closed) { connection.Close(); } } return table; } And here is the general outline of the SQL I'm using: with trouble_calls as ( select work_order_number, account_number, date_entered from work_orders where date_entered >= sysdate - (15 + 31) -- Use the index to limit the number of rows scanned and wo_status not in ('Cancelled') and wo_type = 'Trouble Call' ) select account_number, work_order_number, date_entered from trouble_calls wo where wo.icoms_date >= sysdate - 15 and ( select count(*) from trouble_calls repeat where wo.account_number = repeat.account_number and wo.work_order_number <> repeat.work_order_number and wo.date_entered - repeat.date_entered between 0 and 30 ) >= 1
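
    One way to narrow this down is to time the execute and fetch phases separately, bypassing the adapter: if loading an OracleDataReader into a DataTable is just as slow, the problem is more likely the execution plan the client session gets (for example from bind or NLS session settings differing from SQL Developer) than the Fill() call itself. A hedged diagnostic sketch, reusing the question's Oracle.DataAccess.Client provider:

      using System.Data;
      using System.Diagnostics;
      using Oracle.DataAccess.Client;

      class QueryTimer
      {
          // Times statement execution and row fetching separately to show where the time goes.
          public static DataTable TimedFill(string connectionString, string queryText)
          {
              using (var connection = new OracleConnection(connectionString))
              using (var command = new OracleCommand(queryText, connection))
              {
                  connection.Open();
                  var stopwatch = Stopwatch.StartNew();

                  using (OracleDataReader reader = command.ExecuteReader())
                  {
                      long executeMs = stopwatch.ElapsedMilliseconds;

                      var table = new DataTable("result");
                      table.Load(reader); // pulls all rows, roughly what adapter.Fill does

                      Trace.WriteLine(string.Format("execute: {0} ms, fetch: {1} ms",
                          executeMs, stopwatch.ElapsedMilliseconds - executeMs));
                      return table;
                  }
              }
          }
      }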

    Read the article

  • Improve XPath efficiency for repeated, parameterized queries

    - by Chris Allan
    Hi, I am repeatedly performing the following XPath query (though parameterized by 'keywordText') around 40,000 times: String query = SystemGlobal.YAHOO_KEYWORDSSUBNODE + "/" + SystemGlobal.YAHOO_KEYWORDNODE + "[" + SystemGlobal.YAHOO_ATTRKEYPHRASE + "='" + keywordText + "']"; CachedXPathAPI cachedXPathAPI = new CachedXPathAPI(); NodeIterator nl = cachedXPathAPI.selectNodeIterator(doc.getElementsByTagName(SystemGlobal.YAHOO_KEYWORDSROOT).item(0), query); Node n; if ((n = nl.nextNode()) != null) { keyword.setKeywordId(Long.parseLong(cachedXPathAPI.selectSingleNode(n, SystemGlobal.YAHOO_ATTRKEYID).getTextContent())); keyword.setKeyPhrase(cachedXPathAPI.selectSingleNode(n, SystemGlobal.YAHOO_ATTRKEYPHRASE).getTextContent()); keyword.setStatus(mapStatus(cachedXPathAPI.selectSingleNode(n, SystemGlobal.YAHOO_ATTRSTATUS).getTextContent())); keyword.setCampaignId(Long.parseLong(cachedXPathAPI.selectSingleNode(n, "../../" + SystemGlobal.YAHOO_ATTRCAMPAIGNID).getTextContent())); keyword.setAdGroupId(Long.parseLong(cachedXPathAPI.selectSingleNode(n, "../" + SystemGlobal.YAHOO_ATTRADGROUPID).getTextContent())); On the first run of the script, all 40,000 runs of this piece of code will have nl.nextNode() == null, and everything runs quite quickly. However, on the following runs, when nl.nextNode() != null, then things slow down a lot - this takes around an additional 40min to run (whereas the first run takes maybe 1 minute). Oh, and the doc is constructed like so: InputSource in = new InputSource(new FileInputStream(filename)); DocumentBuilderFactory dfactory = DocumentBuilderFactory.newInstance(); dfactory.setNamespaceAware(true); doc = dfactory.newDocumentBuilder().parse(in); I tried including the following lines reportEvaluator = new XPathEvaluatorImpl(reportDoc); reportResolver = reportEvaluator.createNSResolver(reportDoc); and rather creating a NodeIterator, instead creating an XPathResult: XPathResult result = (XPathResult)reportEvaluator.evaluate(query, doc.getElementsByTagName(SystemGlobal.YAHOO_KEYWORDSROOT).item(0), reportResolver, XPathResult.UNORDERED_NODE_ITERATOR_TYPE, null); however this ran even slower Is there a way in which I can speed up the running of this script? I have seen references to precompiled queries, though I haven't seen many actual details. Also, as seen in the code, I am using CachedXPathAPI, though the benefit for this case is not so great. Any help is much appreciated! Chris Allan

    Read the article

  • Using DateTime in a SqlParameter for Stored Procedure, format error

    - by Matt
    I'm trying to call a stored procedure (on a SQL 2005 server) from C#, .NET 2.0 using DateTime as a value to a SqlParameter. The SQL type in the stored procedure is 'datetime'. Executing the sproc from SQL Management Studio works fine. But every time I call it from C# I get an error about the date format. When I run SQL Profiler to watch the calls, I copy and paste the exec call to see what's going on. These are my observations and notes about what I've attempted: 1) If I pass the DateTime in directly as a DateTime or converted to SqlDateTime, the field is surrounded by a PAIR of single quotes, such as @Date_Of_Birth=N''1/8/2009 8:06:17 PM'' 2) If I pass the DateTime in as a string, I only get the single quotes 3) Using SqlDateTime.ToSqlString() does not result in a UTC formatted datetime string (even after converting to universal time) 4) Using DateTime.ToString() does not result in a UTC formatted datetime string. 5) Manually setting the DbType for the SqlParameter to DateTime does not change the above observations. So, my question then, is how on earth do I get C# to pass a properly formatted time in the SqlParameter? Surely this is a common use case; why is it so difficult to get working? I can't seem to convert DateTime to a string that is SQL compatible (e.g. '2009-01-08T08:22:45') EDIT RE: BFree, the code to actually execute the sproc is as follows: using (SqlCommand sprocCommand = new SqlCommand(sprocName)) { sprocCommand.Connection = transaction.Connection; sprocCommand.Transaction = transaction; sprocCommand.CommandType = System.Data.CommandType.StoredProcedure; sprocCommand.Parameters.AddRange(parameters.ToArray()); sprocCommand.ExecuteNonQuery(); } To go into more detail about what I have tried: parameters.Add(new SqlParameter("@Date_Of_Birth", DOB)); parameters.Add(new SqlParameter("@Date_Of_Birth", DOB.ToUniversalTime())); parameters.Add(new SqlParameter("@Date_Of_Birth", DOB.ToUniversalTime().ToString())); SqlParameter param = new SqlParameter("@Date_Of_Birth", System.Data.SqlDbType.DateTime); param.Value = DOB.ToUniversalTime(); parameters.Add(param); SqlParameter param = new SqlParameter("@Date_Of_Birth", SqlDbType.DateTime); param.Value = new SqlDateTime(DOB.ToUniversalTime()); parameters.Add(param); parameters.Add(new SqlParameter("@Date_Of_Birth", new SqlDateTime(DOB.ToUniversalTime()).ToSqlString())); Additional EDIT The one I thought most likely to work: SqlParameter param = new SqlParameter("@Date_Of_Birth", System.Data.SqlDbType.DateTime); param.Value = DOB; results in this value in the exec call as seen in SQL Profiler: @Date_Of_Birth=''2009-01-08 15:08:21:813'' If I modify this to be @Date_Of_Birth='2009-01-08T15:08:21' it works, but it won't parse with the pair of single quotes, and it won't convert to a datetime correctly with the space between the date and time and with the milliseconds on the end. Update and Success First and foremost, thank you everyone for the answers. I post this for the sake of completeness and accuracy on SO - because I certainly do not do it for my pride... I had copy/pasted the code above after the request from below. I trimmed things here and there to be concise. Turns out my problem was in the code I left out, which I'm sure any one of you would have spotted in an instant. I had wrapped my sproc calls inside a transaction. Turns out that I was simply not doing transaction.Commit()!!!!! I'm ashamed to say it, but there you have it. I still don't know what's going on with the syntax I get back from the profiler.
A coworker watched with his own instance of the profiler from his computer, and it returned proper syntax. Watching the very SAME executions from my profiler showed the incorrect syntax. It acted as a red-herring, making me believe there was a query syntax problem instead of the much more simple and true answer, which was that I need to commit the transaction! I marked an answer below as correct, and threw in some up-votes on others because they did, after all, answer the question, even if they didn't fix my specific (brain lapse) issue. Thanks again for the help.
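    For completeness, here is a minimal sketch of the pattern that ended up working for me once the transaction was committed - the connection string, sproc name and parameter value below are placeholders rather than the real ones from my project:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        // Minimal sketch, assuming a sproc with a single datetime parameter.
        // Connection string, sproc name and value are placeholders.
        using (SqlConnection conn = new SqlConnection("Data Source=.;Initial Catalog=MyDb;Integrated Security=SSPI"))
        {
            conn.Open();
            using (SqlTransaction tx = conn.BeginTransaction())
            using (SqlCommand cmd = new SqlCommand("usp_SaveDateOfBirth", conn, tx))
            {
                cmd.CommandType = CommandType.StoredProcedure;

                // Let ADO.NET handle the formatting: pass a typed DateTime value,
                // never a pre-formatted string.
                SqlParameter dob = new SqlParameter("@Date_Of_Birth", SqlDbType.DateTime);
                dob.Value = new DateTime(2009, 1, 8, 15, 8, 21);
                cmd.Parameters.Add(dob);

                cmd.ExecuteNonQuery();

                // The step I had forgotten - without this the insert never shows up.
                tx.Commit();
            }
        }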

    Read the article

  • php-fpm start error

    - by Sujay
    I am using php-fpm. I recently recompiled PHP to include the imap functions, but on php-fpm start it gives the following error:

        Starting php_fpm
        Error in argument 1, char 1: no argument for option -
        Usage: php-cgi [-q] [-h] [-s] [-v] [-i] [-f <file>]
               php-cgi <file> [args...]
          -a                Run interactively
          -C                Do not chdir to the script's directory
          -c <path>|<file>  Look for php.ini file in this directory
          -n                No php.ini file will be used
          -d foo[=bar]      Define INI entry foo with value 'bar'
          -e                Generate extended information for debugger/profiler
          -f <file>         Parse <file>. Implies `-q'
          -h                This help
          -i                PHP information
          -l                Syntax check only (lint)
          -m                Show compiled in modules
          -q                Quiet-mode. Suppress HTTP Header output.
          -s                Display colour syntax highlighted source.
          -v                Version number
          -w                Display source with stripped comments and whitespace.
          -z <file>         Load Zend extension <file>.
        ................................... failed

    What could be the problem? Is it in php-fpm.conf or php.ini?
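    The usage text above is printed by php-cgi's own argument parser, so the failure happens before php-fpm.conf or php.ini are ever read - one of the options the init script passes is not accepted by the freshly rebuilt binary. A quick way to see what is actually being passed; the paths below are guesses and should be adjusted to this install:

        # Hypothetical paths - adjust to wherever the init script and binary live.
        # 1) See exactly which options the init script hands to php-cgi:
        grep -n 'php-cgi' /etc/init.d/php-fpm
        # 2) Check which php-cgi binary the script picks up and its version,
        #    to make sure it really is the freshly recompiled one:
        which php-cgi
        php-cgi -v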

    Read the article

  • JavaScript variable to ColdFusion variable

    - by Alexander
    I have a tricky one. Using a <cfoutput query="…"> I list some records on the page from a SQL Server database, and at the end of each row I try to insert them as a record into a MySQL database. As you can see, that part is simple, because I can use the exact variables from the output query in my new INSERT INTO statement. BUT: rsPick.name comes from a database with a different character set, and the only way I can get it into my new database correctly is to read it from the rendered web page rather than from the value returned by the output query. So I read this value with the little piece of JavaScript below, put it in the myvalue variable, and then I want ColdFusion to read that variable in order to place it in my SQL statement.

        <cfoutput query="rsPick">
          <tr>
            <td>#rsPick.ABBREVIATION#</td>
            <td id="square">
              #rsPick.name#
            </td>
            <td>#rsPick.Composition#</td>
            <td>
              Transaction done...
              <script type="text/javascript">
                var myvalue = document.getElementById("square").innerHTML
              </script>
            </td>
            <cfquery datasource="#Request.Order#">
              INSERT INTO products
                (iniid, abbreviation, clsid, cllid, dfsid, dflid, szsid, szlid,
                 gross, retail, netvaluebc, composition, name)
              VALUES
                (#rsPick.ID#, '#rsPick.ABBREVIATION#', #rsPick.CLSID#, #rsPick.CLLID#,
                 #rsPick.DFSID#, #rsPick.DFLID#, #rsPick.SZSID#, #rsPick.SZLID#,
                 #rsPick.GROSSPRICE#, #rsPick.RETAILPRICE#, #rsPick.NETVALUEBC#,
                 '#rsPick.COMPOSITION#', '#MYVALUE#')
            </cfquery>
          </tr>
        </cfoutput>
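    In other words, roughly what I am after is something like the sketch below (squareValue and action.cfm are names made up for the example): since ColdFusion finishes running on the server before any of the JavaScript executes in the browser, the value the browser displays has to travel back in a second request before a <cfquery> can use it.

        <!--- Hypothetical sketch: post the value the browser sees back to a
              second template, instead of reading it inline on the same request. --->
        <form name="pickForm" method="post" action="action.cfm">
          <input type="hidden" name="squareValue" id="squareValue" value="">
          <input type="submit" value="Save row">
        </form>
        <script type="text/javascript">
          // copy the rendered cell into the hidden field so it is submitted with the form
          document.getElementById("squareValue").value =
              document.getElementById("square").innerHTML;
        </script>

        <!--- action.cfm (made-up name): the insert then reads the posted value --->
        <cfquery datasource="#Request.Order#">
          INSERT INTO products (name)
          VALUES (<cfqueryparam value="#Form.squareValue#" cfsqltype="cf_sql_varchar">)
        </cfquery>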

    Read the article

  • Apache </Location> Errors

    - by Eddie
    Hi there! I am having real trouble with this installation. Basically, this is the error that I am getting:

        apache2: Syntax error on line 234 of /etc/apache2/apache2.conf:
        Syntax error on line 10 of /etc/apache2/conf.d/amberdms-bs.conf:
        Expected </Location\xc2\xa0/billing_system> but saw </Location>

    and this is the code being used in that module's file:

        #
        # Amberdms Billing System is an open source accounting, service billing
        # and time keeping web application.
        #
        Alias /billing_system /usr/share/amberdms/billing_system
        <Location /billing_system>
          Order deny,allow
          Allow from all
          AllowOverride all

    Please help me! I need this software to be installed, but this error has stumped me.
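    The \xc2\xa0 in the error message is the UTF-8 byte sequence for a non-breaking space, which suggests the opening <Location> tag on line 10 contains a non-breaking space rather than a normal one - typically the result of pasting the snippet off a web page. A sketch of the same section retyped with plain ASCII spaces and an explicit closing tag (paths as shown in the question); the AllowOverride line is kept only because it appears above, although Apache normally rejects AllowOverride inside <Location> sections:

        #
        # Amberdms Billing System
        #
        Alias /billing_system /usr/share/amberdms/billing_system

        <Location /billing_system>
            Order deny,allow
            Allow from all
            # AllowOverride is only valid in <Directory> sections, so Apache may
            # reject it here; shown only to mirror the original file.
            AllowOverride all
        </Location>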

    Read the article

  • How to implement YQL paging?

    - by Erik Vold
    I've read the YQL guide, and I keep reviewing http://developer.yahoo.com/yql/guide/yql-o...entables-paging, and I have been looking at a few examples, but I'm still left pretty confused about how YQL paging works. The problem I am trying to tackle is creating a YQL open data table for the Mozilla Labs Jetpack Gallery's jetpacks pages, http://jetpackgallery.mozillalabs.com/jetpacks. You flip through the pages of jetpacks with the ?page query variable, there is an order_by query variable, and you can only see 10 results per page. Questions:

    1. Should I use or ?
    2. How do I specify the query parameter that indicates the page? In this case it is the 'page' query parameter. I am assuming I should use <urls><url>http://jetpackgallery.mozillalabs.com/jetpacks</url></urls> - is this correct?
    3. In the execute element, will I need to extract the details for each jetpack on the page? If so, how would I organize that for response.object?

    Can anyone provide some help, or perhaps point to a data table that I can look at as a reference, or better documentation on how paging works?
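    For reference, a rough, untested sketch of the kind of table definition the YQL guide's paging section describes is below; it is pieced together from the guide's examples, so the element names and attributes (especially inside <paging>) should be double-checked against table.xsd before relying on them. The idea is that the select binding points at the gallery URL and the paging block tells YQL that the remote 'page' parameter drives paging, ten results per page:

        <?xml version="1.0" encoding="UTF-8"?>
        <table xmlns="http://query.yahooapis.com/v1/schema/table.xsd">
          <meta>
            <author>example</author>
            <description>Jetpack Gallery listing (untested sketch)</description>
            <sampleQuery>select * from {table} limit 10</sampleQuery>
          </meta>
          <bindings>
            <select itemPath="" produces="XML">
              <urls>
                <url>http://jetpackgallery.mozillalabs.com/jetpacks</url>
              </urls>
              <!-- 'page' is the gallery's own query parameter; 10 results per page -->
              <paging model="page">
                <start id="page" default="1"/>
                <pagesize id="per_page" max="10"/>
                <total default="10"/>
              </paging>
            </select>
          </bindings>
        </table>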

    Read the article

< Previous Page | 440 441 442 443 444 445 446 447 448 449 450 451  | Next Page >