Search Results

Search found 10556 results on 423 pages for 'practical approach'.

  • Restart a single apache vhost

    - by snowflake
    I've got an Apache 2 server with several virtual hosts. I currently have MySQL database lock problems on one virtual host. A common and practical way to easily release those locks and unblock the site (while the dev team refactors its application to avoid the locks) is simply to restart the Apache server. I'm wondering if there is a way to restart only the single vhost that is in trouble. Thank you for any comments.

  • Squid on Windows load balancing only to one server

    - by Martin L.
    After thousands of Google searches and days of trying, I can't get the load balancer/failover in Squid on Windows to work. I am using Squid 2.7. My web servers are two single-NIC lighttpd machines and one dual-NIC lighttpd machine. server1 in this example is running Squid on port 80 and lighttpd on port 8080 (just to test).

    Requirements: all three web servers running lighttpd should be balanced. There are two acceptable options for load balancing: best would be that if server1 is busy, server2 takes over; if server2 is busy, server3 takes over, and so on. Alternatively, round-robin style evenly distributed load, e.g. server1 takes the first call, server2 the second, etc. All requests should be treated the same way (no URL rewriting or the like). The Host header sent to Squid has to be passed through to every backend server as the HTTP Host header, whether it is "server1", "server1.company.internal" or "10.211.1.1".

    My approach:

        acl all src all
        acl manager proto cache_object
        http_port 80 accel defaultsite=server1.company.internal vhost
        #reverse proxy entries
        cache_peer 10.211.2.1 parent 8080 0 no-query originserver round-robin login=PASS name=server1_nic1
        cache_peer 10.211.1.2 parent 80 0 no-query originserver round-robin login=PASS name=server2_nic1
        cache_peer 10.211.2.3 parent 8080 0 no-query originserver round-robin login=PASS name=server3_nic1
        cache_peer 10.211.2.4 parent 8080 0 no-query originserver round-robin login=PASS name=server3_nic2
        #decl of names of squid host
        acl registered_name_hostdomain dstdomain server1.company.internal
        acl registered_name_host dstdomain server1
        #ip of squid host
        acl registered_name_ip dstdomain 10.211.2.1
        # access: redirects the correct squid hostname
        http_access allow registered_name_hostdomain
        http_access allow registered_name_host
        http_access allow registered_name_ip
        http_access deny all
        cache_peer_access server1_nic1 allow registered_name_hostdomain
        cache_peer_access server1_nic1 allow registered_name_host
        cache_peer_access server1_nic1 allow registered_name_ip
        cache_peer_access server2_nic1 allow registered_name_hostdomain
        cache_peer_access server2_nic1 allow registered_name_host
        cache_peer_access server2_nic1 allow registered_name_ip
        cache_peer_access server3_nic1 allow registered_name_hostdomain
        cache_peer_access server3_nic1 allow registered_name_host
        cache_peer_access server3_nic1 allow registered_name_ip
        cache_peer_access server3_nic2 allow registered_name_hostdomain
        cache_peer_access server3_nic2 allow registered_name_host
        cache_peer_access server3_nic2 allow registered_name_ip
        cache_peer_access server1_nic1 deny all
        cache_peer_access server2_nic1 deny all
        cache_peer_access server3_nic1 deny all
        cache_peer_access server3_nic2 deny all
        never_direct allow all

    Problems: the load balancer does not balance to anything other than the first server. Only if the first server is killed in some way does the second take over. I have seen the others working at some point, but definitely not with the intended load balancing described above. If cache_peer_access is not defined, sometimes the wrong hostname is sent to the backend web server, and this always depends on the defaultsite= parameter, probably because the Host header on the request to Squid is not set and gets replaced by defaultsite. Leaving out defaultsite didn't solve the problem. The only workaround I found for this is the current approach with cache_peer_access.

    Questions: does cache_peer_access influence the round-robin? Is there a better workaround to pass the Host header to the backend web servers? Which parameters increase the speed of load balancing, or does anyone have a better approach? -Martin

  • Subdomains, folders, internationalization, and hosting solutions

    - by justinbach
    I'm a web developer and I recently landed a gig to develop the US / international version of a site for a company that's big in Europe but hasn't done much expansion into the US yet. They've got an existing site at company.com, which should remain visible to European customers after the new site goes up, and an existing (not great) site at company.us, which I'm going to be redeveloping (the .us site will be taken down when my version goes up--keep reading for details). My solution needs to take into account the fact that there are going to be new, localized versions of the site in the fairly near future, so the framework I'm writing needs to be able to handle localizations fairly easily (dynamically load language packs, etc). The tricky thing is the European branch of the company manages the .com site hosting (IIS-based) and the DNS, while I'll be managing the US hosting (and future localizations), which will likely be apache-based. I've never been a big fan of the ".us" TLD--I think most US users are accustomed to visiting the .com--so the thought is that the European branch will detect the IP of inbound traffic and redirect all US-based addresses to us.example.com (or whatever the appropriate localized subdomain might be), which would point to the IP address of my host. I'd then serve the appropriate locale-specific content by pulling the subdomain from the $_SERVER superglobal (assuming PHP). I couldn't find any examples of international organizations that take a subdomain-based approach for localization, but I'm not sure I have any other options as a result of the unique hosting structure here (in that there's not a unified hosting solution for the European and US sites). In my experience, the US version of an international site would live at domain.com/us, not at us.domain.com, and I'd imagine that this has to do with SEO (subdomains are treated as separate sites, so improved rankings for the US site wouldn't help the Canadian version if subdomains are used to differentiate between them). My question is: is there a better approach to solving this problem than the one I'm taking? Ideally, I'd like to use a folder-based approach (see adidas.com as an example of what I'm talking about), but I'm not sure that's a possibility given that the US site (and other localizations) will not be hosted on the same server as the rest of the .com. Can you, in IIS, map a folder (e.g. domain.com/us) to a different IP address? What would you recommend? Thanks for your consideration.
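
    To make the locale-from-subdomain step concrete, here is a sketch of the idea in plain WSGI Python, used purely for brevity (the question assumes PHP, where the same value comes from the $_SERVER['HTTP_HOST'] entry); the hostnames and locale codes are illustrative only.

        # Hedged sketch: pick a locale from the left-most label of the Host header.
        def application(environ, start_response):
            host = environ.get("HTTP_HOST", "").split(":")[0]   # e.g. "us.example.com"
            labels = host.split(".")
            subdomain = labels[0] if len(labels) > 2 else ""    # "" for bare example.com
            locale = {"us": "en_US", "ca": "en_CA", "de": "de_DE"}.get(subdomain, "en_US")
            body = ("Serving locale: " + locale).encode("utf-8")
            start_response("200 OK", [("Content-Type", "text/plain"),
                                      ("Content-Length", str(len(body)))])
            return [body]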

  • How do I create a table of contents in a Word document that has a mind of its own?

    - by Howiecamp
    I'm embarrassed to admit that I'm struggling to get a table of contents going in a Word doc that's already been created. I know enough to understand that the TOC is based on the heading styles and indentation. My approach so far has been to auto-generate the TOC and then try (unsuccessfully) to fix the problems; perhaps this isn't the best approach in this situation. What's happening is that the TOC is missing half my sections, and for others it's adding way too much detail. Again, my sense is that I have to "fix" individual section headings, but I haven't been successful so far.

  • How do you disable an upstart service in Ubuntu 10.10?

    - by Doug
    In 10.10, upstart is used instead of sysvinit. It's possible to remove annoying upstart services you do not want by deleting the appropriate file in /etc/init/ (e.g. /etc/init/blah.conf). However, this seems a heavy-handed approach. How do you correctly configure upstart so that these services can be selectively turned on and off via the command line? As a practical example, the answers listed here for turning gdm off using rcconf no longer work: How do I prevent GDM from running at boot on Ubuntu?

  • Nginx Tornado Combination Causing 502 Bad Gateway Errors

    - by PlaidFan
    We are facing a problem with intermittent 502 errors, and tracking down the reasons has been a very frustrating exercise. We can reproduce the problem by sending several simultaneous requests quickly. The catch is that "several" is only in the range of 10 to 20 requests within 5 seconds (not a typo), so clearly this type of load should be handled easily. We really like the Nginx + Tornado approach but are considering moving to a more traditional (e.g. threaded) approach because this problem has been very difficult to solve. I was wondering if you a) know how to fix this issue and b) know how we can track down the culprit(s). The log files simply report a connection refused. We have the same problem as this post: How do I debug an HTTP 502 error? But there is no answer provided on how to solve the problem, so I'm hoping you can help, because this may be a common issue with this type of setup. Thanks in advance, Paul
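
    A common culprit in this kind of setup is a request handler that blocks Tornado's single IOLoop: a handful of slow, blocking requests stalls every other connection, nginx cannot reach the upstream, and sporadic 502s appear at loads that should be trivial. The sketch below only illustrates that distinction with made-up URLs, ports, and timings (it is not the poster's application); comparing how /blocking and /nonblocking behave under a burst of concurrent requests is one way to narrow down the culprit.

        # Hedged sketch: hypothetical handlers contrasting a blocking call with a
        # non-blocking one on Tornado's IOLoop. Ten to twenty concurrent requests to
        # /blocking can stall the whole process (and nginx starts returning 502s),
        # while /nonblocking keeps serving because it yields back to the IOLoop.
        import time
        import tornado.gen
        import tornado.ioloop
        import tornado.web

        class BlockingHandler(tornado.web.RequestHandler):
            def get(self):
                time.sleep(2)                      # blocks the single-threaded IOLoop
                self.write("done (blocking)\n")

        class NonBlockingHandler(tornado.web.RequestHandler):
            async def get(self):
                await tornado.gen.sleep(2)         # suspends this request only
                self.write("done (non-blocking)\n")

        def make_app():
            return tornado.web.Application([
                (r"/blocking", BlockingHandler),
                (r"/nonblocking", NonBlockingHandler),
            ])

        if __name__ == "__main__":
            make_app().listen(8000)                # nginx would proxy to this port
            tornado.ioloop.IOLoop.current().start()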

  • How do you pick what server setup you need?

    - by ed209
    I recently started receiving a pubsub data feed from Etsy. It averages around 250 notifications per minute, but obviously, when the USA wakes up, that spikes quite heavily. I want to be able to deal with those spikes (about 3 per day); the rest of the day is fine. What's the best method of getting the right server configuration? My current approach is to keep upgrading until the server stops dying... The next leap is:

        Processor: AMD Phenom II X6-1055T HEXA Core
        RAM: 4GB DDR2 SDRAM
        HD1: SATA Drive (7,200 rpm) (+500 GB 7200 RPM SATA hard drive)
        HD2: SATA Backup Drive (+500 GB SATA (7,200 rpm))
        OS: Linux OS (+CentOS 5 64-bit)
        Bandwidth: 6000GB Monthly Transfer (3000 in + 3000 out) (+100M uplink port)

    What's the best approach to working out what sort of server setup you need?

  • E-ink screens for desktop PC?

    - by Legooolas
    Are there PC monitors which are e-ink displays? E-ink is so much easier to read documents on that I'd rather like a secondary monitor just for documents, but I can't find any monitors available currently. Perhaps just an e-reader with some kind of remote control from a PC, so that it can be put in a cradle and display a document, but it would also need to have a fairly large screen to make it practical (perhaps something like A4 (EU) or Letter (US) sized?)

  • Limit NFS block size from server side?

    - by paulw1128
    Is it possible to enforce a maximum rsize/wsize in nfsd? I'm having issues related to IP fragmentation (yes, I'm stuck with NFS-over-UDP, contrary to the warnings in the man page), and have no practical access to the client mount command (it's buried in one of many TFTP boot images). http://nfs.sourceforge.net/nfs-howto/ar01s05.html lists a kernel source parameter limiting the maximum block size, but I'm not going to get away with recompiling the nfsd kernel module, so that's not really an option either :-(

  • Is there any way to do "mail server parking"?

    - by percyboy
    I am managing a mail server which will be temporarily shut down for three or four days due to data center maintenance. I want to find a solution that (completely or partly) avoids losing mail during this unavailable period. Because the data volume is huge, it is very hard to migrate it to another data center. One approach I have thought of is to set up a temporary mail server in another data center; when new mail is received, that server would automatically send a reply telling the sender, "We are temporarily closed for three or four days. Please send the mail later or contact us by other means." I am wondering: is this approach possible with existing mail server software? Or is something better available? (A free solution is preferred, since this is only temporary.)
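
    For the auto-reply idea specifically, a small script will do if the temporary MTA can pipe each incoming message to it (for example through an alias); the addresses below are placeholders, and a backup MX that simply queues mail until the primary returns is usually the gentler option. A rough sketch:

        # Hedged sketch: read one incoming message on stdin, send a canned reply
        # through the temporary box's own MTA. Placeholder addresses throughout.
        import sys
        import smtplib
        from email import message_from_bytes
        from email.message import EmailMessage

        NOTICE = ("We are temporarily closed for three or four days due to data center "
                  "maintenance. Please resend your mail later or contact us by other means.")

        def main():
            incoming = message_from_bytes(sys.stdin.buffer.read())
            if incoming.get("Auto-Submitted", "no") != "no":
                return                               # don't answer other robots (mail loops)
            sender = incoming.get("Reply-To") or incoming.get("From")
            if not sender:
                return                               # nothing sensible to reply to

            reply = EmailMessage()
            reply["From"] = "postmaster@example.com"  # placeholder
            reply["To"] = sender
            reply["Subject"] = "Re: " + incoming.get("Subject", "your message")
            reply["Auto-Submitted"] = "auto-replied"
            reply.set_content(NOTICE)

            with smtplib.SMTP("localhost") as smtp:   # relay via the local MTA
                smtp.send_message(reply)

        if __name__ == "__main__":
            main()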

  • Using the Dropbox API instead of an FTP server for backing up DB/source in your application

    - by Somebody still uses you MS-DOS
    This is a small application scenario. Usually, when you have to back up the source code/database on your server, you use a second FTP server, a cron job to tar.gz your DB dumps and source files, and send that file to your FTP server from your application server. Dropbox created an API to use its infrastructure. Since they provide 2 GB for free accounts, I thought about uploading to it instead of an FTP server. So, if you do some freelance work, you can create a free account for each client and use this approach, maybe encrypting the files you send. You even gain a revision for each uploaded file, like a revision control system, for free, covering the last 30 days. What do you think of this approach? Is it possible? And, more importantly: what are the security risks involved? (That's why I'm asking this on Server Fault, since the sysadmin point of view will be more accurate.) Thanks!
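
    A minimal sketch of the idea, assuming the current official "dropbox" Python SDK (v2 API) and one access token per client account; the question predates that SDK, and the token, database name, and paths below are placeholders, so treat this as illustrative rather than the actual setup.

        # Hedged sketch: dump a database, pack it with the sources, and push the
        # archive to a per-client Dropbox account. A cron job would call this daily.
        import datetime
        import subprocess
        import dropbox

        ACCESS_TOKEN = "..."       # placeholder: per-client app token
        DB_NAME = "clientdb"       # placeholder database name

        def backup():
            stamp = datetime.date.today().isoformat()
            archive = f"/tmp/backup-{stamp}.tar.gz"
            subprocess.run(f"mysqldump {DB_NAME} | gzip > /tmp/db.sql.gz",
                           shell=True, check=True)
            subprocess.run(["tar", "czf", archive, "/var/www/site", "/tmp/db.sql.gz"],
                           check=True)

            dbx = dropbox.Dropbox(ACCESS_TOKEN)
            with open(archive, "rb") as f:
                # files_upload handles archives up to ~150 MB; larger backups need
                # an upload session. Encrypting the archive first (e.g. with gpg)
                # addresses part of the security concern raised above.
                dbx.files_upload(f.read(), f"/backups/backup-{stamp}.tar.gz")

        if __name__ == "__main__":
            backup()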

  • Benchmark virtual machines?

    - by evan
    I'm looking for a good way to benchmark the performance of Ubuntu machines (preferably from the command line - I only care about hard drive speed, memory, and CPU, not graphics). Are there any programs that could also be used on Mac or Windows so I could compare the results against an Apple or PC desktop? Ultimately I'd like to use these benchmarks to compare different virtual machine configurations (speeds on different hosts and different hardware, to get a practical idea of the differences between setups and rigs). Thanks!
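
    For a crude, like-for-like number that runs unchanged on Linux, macOS, and Windows, a small script in any interpreter installed on all of them will do; the sketch below is only a rough illustration (dedicated tools such as sysbench or bonnie++ give far more meaningful results), and the sizes are arbitrary.

        # Hedged sketch: a tiny CPU and sequential-disk-write benchmark, intended
        # only for comparing the same script across hosts and VM configurations.
        import os
        import time

        def cpu_bench(n=2_000_000):
            start = time.perf_counter()
            total = 0
            for i in range(n):
                total += i * i                  # tight integer loop
            return time.perf_counter() - start

        def disk_bench(path="bench.tmp", size_mb=256):
            block = os.urandom(1024 * 1024)     # 1 MiB of incompressible data
            start = time.perf_counter()
            with open(path, "wb") as f:
                for _ in range(size_mb):
                    f.write(block)
                f.flush()
                os.fsync(f.fileno())            # make sure it really hit the disk
            elapsed = time.perf_counter() - start
            os.remove(path)
            return size_mb / elapsed            # MB/s sequential write

        if __name__ == "__main__":
            print(f"CPU loop:   {cpu_bench():.2f} s")
            print(f"Disk write: {disk_bench():.1f} MB/s")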

  • How to configure LAMP server for iOS social/chat app?

    - by andufo
    I'm in the last development phase of a social networking app for iOS that has a chat module. Right now I'm trying to figure out the best way to achieve these features: send a message instantly to another user; if the other user is online, delivery should be instant; if a user reads the message, the sender should be notified of that action; if a user visits my profile, I should be notified instantly. What would be, in your opinion, the best approach to achieve that experience? The server runs CentOS 5.6. I've previously looked at XMPP and sockets, but I'm still unsure what the best approach is. Any opinions and resources will be much appreciated.

  • Examples of temporal database designs? [closed]

    - by miku
    I'm researching various database designs for historical record keeping, because I would like to implement a prototypical web application that (excessively) tracks changes made by users and lets them undo things, see revisions, etc. The options I've considered so far:

    1. I'd love to use Mercurial or git as the backend (with files as records), since they already implement the kind of append-only changes I imagine. I tried git and dulwich (a Python git API) and it went OK, but I was concerned about performance.
    2. Bi-temporal database design lets you store a row along with the time periods during which that record was valid. This sounds more performant than operating on files on disk (as in 1.), but I had little luck finding real-world examples (e.g. from open source projects) that use this kind of design and are approachable enough to learn from (a minimal sketch of the idea follows below).
    3. Revisions à la MediaWiki, or an extra table for versions, as in Redmine. The problem here is that a DELETE would take the whole history with it.
    4. I looked at NoSQL solutions, too. With a document-oriented approach it would be simple to store the whole history of an entity within the document itself, which would reduce design plus implementation time compared to an RDBMS approach. However, in this case I'm a bit concerned about ACID properties, which would be important in the application.

    I'd like to ask about experiences with real-world, pragmatic designs for temporal data.
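
    The sketch mentioned in option 2, using only Python's standard-library sqlite3 module; the table, columns, and values are made up, and a real bi-temporal design would also record transaction time and guard against overlapping validity periods.

        # Hedged sketch of a valid-time table: an "update" closes the current row
        # and appends a new one, so history is never destroyed and any point in
        # time can be queried back.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""
            CREATE TABLE price (
                item_id    INTEGER NOT NULL,
                amount     REAL    NOT NULL,
                valid_from TEXT    NOT NULL,                      -- ISO dates for brevity
                valid_to   TEXT    NOT NULL DEFAULT '9999-12-31'  -- open-ended
            )""")

        def set_price(item_id, amount, as_of):
            conn.execute("UPDATE price SET valid_to = ? "
                         "WHERE item_id = ? AND valid_to = '9999-12-31'", (as_of, item_id))
            conn.execute("INSERT INTO price (item_id, amount, valid_from) VALUES (?, ?, ?)",
                         (item_id, amount, as_of))

        set_price(1, 9.99, "2012-01-01")
        set_price(1, 12.50, "2012-06-01")

        # Point-in-time query: what was the price of item 1 on 2012-03-15?
        row = conn.execute("SELECT amount FROM price "
                           "WHERE item_id = 1 AND valid_from <= ? AND valid_to > ?",
                           ("2012-03-15", "2012-03-15")).fetchone()
        print(row[0])   # -> 9.99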

  • Scoring/analysis of Subjective testing for skills assessment

    - by ChrisBint
    I am lucky in the sense that I have been given the opportunity to be a 'Technical Troubleshooter' for our offshore development team. While I am confident and capable of dealing with most issues, I have come across something that I am not. Based on initial discussions with various team members both on and offshore, a requirement for a 'repeatable, consistent' skills assessment has been identified. In my opinion, the best way to achieve this would be a combination of objective and subjective tests. The former normally being an initial online skills assessment on various subjects, for example General C#, WCF and MVC. The latter being a technical test where the candidate would need to solve various problems and (hopefully) explain the thought processes involved with the solution whilst doing so. Obviously, the first method is consistent, repeatable and extremely accurate. The second is always going to be subjective and based on the approach, the solution (or possibly not) and other factors. The 'scoring' of this is also going to be down to the experience and skills of the assessor and this is where my problem lies; The person that is expected to be the assessor initially (me) has no experience. The people that will ultimately continue this process for other people will never remain the same due to project constraints and internal reasons, this changes the baseline for comparison. I am not aware of any suitable system that can be classed as consistent and repeatable for subjective tests with the 2 factors above, let alone if those did not exist. So anyway, I have to present a plan that will ultimately generate a skills/gap analysis and it is unlikely that I will be able to use an objective method (budget constraints most likely reason). The only option left is the subjective methods and the issues above. Does anyone have any suggestions for an approach that may tick all the boxes?

  • Handling EJB/JPA exceptions in a “beautiful” way

    - by Rodrigues, Raphael
    There are several ways to handle JPA exceptions, already detailed in lots of blogs. Here, I intend to show one of them, which I consider rather beautiful. My use case has a unique constraint that is violated when the user tries to create a duplicate value in the database. JPA throws a java.sql.SQLIntegrityConstraintViolationException, and I have to catch it and replace the message. In fact, the ADF Business Components framework already has a beautiful solution for this, very well documented here. However, for EJB/JPA there's no similar approach. In my case, what I had to do was:

    1. Create a custom error handler class in the DataBindings file (here is how you accomplish it).
    2. Override the reportException method and check whether the type of exception exists in the property file: a. if yes, change the message and rethrow the exception; b. if not, continue execution.

    The main benefit of this approach is that when a new or unhandled exception is raised, the only work needed is to add a single entry to the bundle property file. Here are the pictures, step by step: 1. CustomExceptionHandler.java 2. Databindings.cpx 3. Bundle file 4. jspx 5. Stacktrace. Give your opinion: what do you think about this?

  • Looking for good Regex book

    - by Cyberherbalist
    I've been trying to get a good grounding with Regular Expressions, and am looking for a single book to do so. I've been going through Amazon.com's listings on this subject, and I've identified a few possibilities, but am unsure which would be best for a C# developer who can write very simple Regexs, but wants to learn more. On a scale of 0-9 where 0 is knowing how to spell "Regex" but nothing else, and 9 where I could write a book on the subject out of my own head, I would place myself at 2. Which of the following would be your choice: Mastering Regular Expressions by Jeffrey E F Friedl Regular Expressions Cookbook by Jan Goyvaerts and Steven Levithan Sams Teach Yourself Regular Expressions in 10 Minutes by Ben Forta Beginning Regular Expressions (Programmer to Programmer) by Andrew Watt Regular Expression Recipes for Windows Developers: A Problem-Solution Approach by Nathan A. Good Regular Expression Recipes: A Problem-Solution Approach by Nathan A. Good Now, according to Amazon, "Regular Expressions Cookbook" (REC) above is rated the highest according to user ratings, but only based on 20 reviews. The first one, "Mastering Regular Expressions" (MRE) is rated second based on 140 reviews. This alone suggests that MRE might be by far the best one. But is it best for a relative beginner? Would I perhaps be better getting "Beginning Regular Expressions" (BRE) instead, to start with? Please help me resolve my confusion!

  • Manage a flexible and elastic Data Center with Oracle VM Manager (By Tarry Singh - PACKT Publishing)

    - by frederic.michiara
    For those looking for an easy read and a good first approach to Oracle VM Manager and Oracle VM Servers, I would recommend the following book, even though it was written for 2.1.2 and Oracle VM 2.2 is now available: Oracle VM Manager 2.1.2 - Manage a Flexible and Elastic Data Center with Oracle VM Manager. It covers:

        Learning quickly how to install Oracle VM Manager and Oracle VM Servers
        Managing your virtual data center using Oracle VM Manager
        Importing VMs from the web, templates, repositories, and other VM formats such as VMware
        Powerful Xen hypervisor utilities such as xm, xentop, and virsh
        A practical, hands-on book with step-by-step instructions

    Oracle VM experts might be frustrated, but this book is not aimed at experts; it is aimed at those who need an introduction to the subject with good coverage of everything you need to know.

    The book is available at https://www.packtpub.com/oracle-vm-manager-2-1-2/book
    Table of contents: https://www.packtpub.com/article/oracle-vm-manager-2-1-2-table-of-contents
    Sample chapter: https://www.packtpub.com/sites/default/files/sample_chapters/7122-oracle-virtualization-sample-chapter-4-oracle-vm-management.pdf

    Also read articles from Tarry Singh on http://www.packtpub.com/ :
    Oracle VM Management: http://www.packtpub.com/article/oracle-vm-management-1
    Extending Oracle VM Management: http://www.packtpub.com/article/oracle-vm-management-2

    Hope you'll enjoy this book as a first approach to Oracle VM. For more information on Oracle VM:
    Oracle VM on OTN: http://www.oracle.com/technology/products/vm/index.html
    Oracle VM Wiki: http://wiki.oracle.com/page/Oracle+VM
    Oracle VM on IBM System x: http://www-03.ibm.com/systems/x/solutions/infrastructure/erpcrm/oracle/virtualization.html

  • The Debut of Oracle Database Firewall at RSA 2011

    - by Troy Kitch
    We're very proud of the coverage and headlines Oracle Database Firewall made this past week during RSA Conference 2011 in San Francisco. In case you missed our previous post, we announced the availability of this latest addition to the Oracle Defense-in-Depth database security solutions. The announcement was picked up many publications including eWeek, CRN, InformationWeek and more. Here is just some of the press on this very important security solution: "It's rare to find a new product category these days, but I think a new product from Oracle fills the bill. In the crowded enterprise security field, that's saying something." Enterprise System Journal: A New Approach to Database Security By James E. Powell "Databases and the content they store are among the most valuable IT assets - and the most targeted by hackers. In an effort to help secure databases, Oracle today is launching the new Oracle Database Firewall as an approach to defend databases against SQL injection and other database attacks." Database Journal: Oracle Debuts Database Firewall (also appeared in InternetNews.com) By Sean Michael Kerner "Oracle Database Firewall understands SQL-statement formats, and can be configured to blacklist and whitelist traffic based on source. When it detects suspicious statements within SQL traffic -- ones that might indicate SQL injection attacks, for example -- it can replace them with neutral statements that will keep the session running without allowing potentially harmful traffic through." Network World: Oracle Database Firewall defuses SQL injection attacks By Tim Green "The firewall uses "SQL grammar analysis" to prevent SQL injection attacks and other attempts to grab information. The Oracle Database Firewall features white and black lists policies, exceptions and rules that mark the time of day, IP address, application and user." ZDNet: RSA Roundup: Oracle Database Firewall By Larry Dignan "The database giant announced Oracle Database Firewall on Feb. 14 at the RSA Conference in San Francisco. The firewall application establishes a "defensive perimeter" around databases by monitoring and enforcing normal application behavior in real-time, the company said." eWEEK: Oracle Database Firewall Delivers Vendor-Agnostic Security By Fahmida Y. Rashid

  • Using Table-Valued Parameters With SQL Server Reporting Services

    - by Jesse
    In my last post I talked about using table-valued parameters to pass a list of integer values to a stored procedure without resorting to using comma-delimited strings and parsing out each value into a TABLE variable. In this post I’ll extend the “Customer Transaction Summary” report example to see how we might leverage this same stored procedure from within an SQL Server Reporting Services (SSRS) report. I’ve worked with SSRS off and on for the past several years and have generally found it to be a very useful tool for building nice-looking reports for end users quickly and easily. That said, I’ve been frustrated by SSRS from time to time when seemingly simple things are difficult to accomplish or simply not supported at all. I thought that using table-valued parameters from within a SSRS report would be simple, but unfortunately I was wrong. Customer Transaction Summary Example Let’s take the “Customer Transaction Summary” report example from the last post and try to plug that same stored procedure into an SSRS report. Our report will have three parameters: Start Date – beginning of the date range for which the report will summarize customer transactions End Date – end of the date range for which the report will summarize customer transactions Customer Ids – One or more customer Ids representing the customers that will be included in the report The simplest way to get started with this report will be to create a new dataset and point it at our Customer Transaction Summary report stored procedure (note that I’m using SSRS 2012 in the screenshots below, but there should be little to no difference with SSRS 2008): When you initially create this dataset the SSRS designer will try to invoke the stored procedure to determine what the parameters and output fields are for you automatically. As part of this process the following dialog pops-up: Obviously I can’t use this dialog to specify a value for the ‘@customerIds’ parameter since it is of the IntegerListTableType user-defined type that we created in the last post. Unfortunately this really throws the SSRS designer for a loop, and regardless of what combination of Data Type, Pass Null Value, or Parameter Value I used here, I kept getting this error dialog with the message, "Operand type clash: nvarchar is incompatible with IntegerListTableType". This error message makes some sense considering that the nvarchar type is indeed incompatible with the IntegerListTableType, but there’s little clue given as to how to remedy the situation. I don’t know for sure, but I think that behind-the-scenes the SSRS designer is trying to give the @customerIds parameter an nvarchar-typed SqlParameter which is causing the issue. When I first saw this error I figured that this might just be a limitation of the dataset designer and that I’d be able to work around the issue by manually defining the parameters. I know that there are some special steps that need to be taken when invoking a stored procedure with a table-valued parameter from ADO .NET, so I figured that I might be able to use some custom code embedded in the report  to create a SqlParameter instance with the needed properties and value to make this work, but the “Operand type clash" error message persisted. The Text Query Approach Just because we’re using a stored procedure to create the dataset for this report doesn’t mean that we can’t use the ‘Text’ Query Type option and construct an EXEC statement that will invoke the stored procedure. 
In order for this to work properly the EXEC statement will also need to declare and populate an IntegerListTableType variable to pass into the stored procedure. Before I go any further I want to make one point clear: this is a really ugly hack and it makes me cringe to do it. Simply put, I strongly feel that it should not be this difficult to use a table-valued parameter with SSRS. With that said, let’s take a look at what we’ll have to do to make this work. Manually Define Parameters First, we’ll need to manually define the parameters for report by right-clicking on the ‘Parameters’ folder in the ‘Report Data’ window. We’ll need to define the ‘@startDate’ and ‘@endDate’ as simple date parameters. We’ll also create a parameter called ‘@customerIds’ that will be a mutli-valued Integer parameter: In the ‘Available Values’ tab we’ll point this parameter at a simple dataset that just returns the CustomerId and CustomerName of each row in the Customers table of the database or manually define a handful of Customer Id values to make available when the report runs. Once we have these parameters properly defined we can take another crack at creating the dataset that will invoke the ‘rpt_CustomerTransactionSummary’ stored procedure. This time we’ll choose the ‘Text’ query type option and put the following into the ‘Query’ text area: 1: exec('declare @customerIdList IntegerListTableType ' + @customerIdInserts + 2: ' EXEC rpt_CustomerTransactionSummary 3: @startDate=''' + @startDate + ''', 4: @endDate='''+ @endDate + ''', 5: @customerIds=@customerIdList')   By using the ‘Text’ query type we can enter any arbitrary SQL that we we want to and then use parameters and string concatenation to inject pieces of that query at run time. It can be a bit tricky to parse this out at first glance, but from the SSRS designer’s point of view this query defines three parameters: @customerIdInserts – This will be a Text parameter that we use to define INSERT statements that will populate the @customerIdList variable that is being declared in the SQL. This parameter won’t actually ever get passed into the stored procedure. I’ll go into how this will work in a bit. @startDate – This is a simple date parameter that will get passed through directly into the @startDate parameter of the stored procedure on line 3. @endDate – This is another simple data parameter that will get passed through into the @endDate parameter of the stored procedure on line 4. At this point the dataset designer will be able to correctly parse the query and should even be able to detect the fields that the stored procedure will return without needing to specify any values for query when prompted to. Once the dataset has been correctly defined we’ll have a @customerIdInserts parameter listed in the ‘Parameters’ tab of the dataset designer. We need to define an expression for this parameter that will take the values selected by the user for the ‘@customerIds’ parameter that we defined earlier and convert them into INSERT statements that will populate the @customerIdList variable that we defined in our Text query. In order to do this we’ll need to add some custom code to our report using the ‘Report Properties’ dialog: Any custom code defined in the Report Properties dialog gets embedded into the .rdl of the report itself and (unfortunately) must be written in VB .NET. 
Note that you can also add references to custom .NET assemblies (which could be written in any language), but that’s outside the scope of this post so we’ll stick with the “quick and dirty” VB .NET approach for now. Here’s the VB .NET code (note that any embedded code that you add here must be defined in a static/shared function, though you can define as many functions as you want): 1: Public Shared Function BuildIntegerListInserts(ByVal variableName As String, ByVal paramValues As Object()) As String 2: Dim insertStatements As New System.Text.StringBuilder() 3: For Each paramValue As Object In paramValues 4: insertStatements.AppendLine(String.Format("INSERT {0} VALUES ({1})", variableName, paramValue)) 5: Next 6: Return insertStatements.ToString() 7: End Function   This method takes a variable name and an array of objects. We use an array of objects here because that is how SSRS will pass us the values that were selected by the user at run-time. The method uses a StringBuilder to construct INSERT statements that will insert each value from the object array into the provided variable name. Once this method has been defined in the custom code for the report we can go back into the dataset designer’s Parameters tab and update the expression for the ‘@customerIdInserts’ parameter by clicking on the button with the “function” symbol that appears to the right of the parameter value. We’ll set the expression to: 1: =Code.BuildIntegerListInserts("@customerIdList ", Parameters!customerIds.Value)   In order to invoke our custom code method we simply need to invoke “Code.<method name>” and pass in any needed parameters. The first parameter needs to match the name of the IntegerListTableType variable that we used in the EXEC statement of our query. The second parameter will come from the Value property of the ‘@customerIds’ parameter (this evaluates to an object array at run time). Finally, we’ll need to edit the properties of the ‘@customerIdInserts’ parameter on the report to mark it as a nullable internal parameter so that users aren’t prompted to provide a value for it when running the report. Limitations And Final Thoughts When I first started looking into the text query approach described above I wondered if there might be an upper limit to the size of the string that can be used to run a report. Obviously, the size of the actual query could increase pretty dramatically if you have a parameter that has a lot of potential values or you need to support several different table-valued parameters in the same query. I tested the example Customer Transaction Summary report with 1000 selected customers without any issue, but your mileage may vary depending on how much data you might need to pass into your query. If you think that the text query hack is a lot of work just to use a table-valued parameter, I agree! I think that it should be a lot easier than this to use a table-valued parameter from within SSRS, but so far I haven’t found a better way. It might be possible to create some custom .NET code that could build the EXEC statement for a given set of parameters automatically, but exploring that will have to wait for another post. For now, unless there’s a really compelling reason or requirement to use table-valued parameters from SSRS reports I would probably stick with the tried and true “join-multi-valued-parameter-to-CSV-and-split-in-the-query” approach for using mutli-valued parameters in a stored procedure.

  • How can I use Windows Workflow for validation of a Silverlight application?

    - by Josh C.
    I want to use Windows Workflow to provide a validation service. The validation that will be provided may have multiple tiers, with chaining and redirecting to other stages of validation. The application that will generate the data for validation is a Silverlight app. I imagine the validation will take longer than the blink of an eye, so I don't want to tie the user up. Instead, I would like the user to submit the current data for validation. If the validation happens quickly, the service will perform an asynchronous callback to the app; the viewmodel that made the call would receive the validation output and post it into the view. If the validation takes a long time, the user can move forward in the Silverlight app, disregarding the potential output of the validation, and the viewmodel that made the call would be gone. I expect there would be another viewmodel that would contain the current validation output in its model; the validation value would change, causing the user to get a notification in a smaller notification area. I can see how the current view's viewmodel would call the validation through the viewmodel that contains the validation output, but I am concerned that the service call will time out. Also, I think the user may have already changed the values since the original validation, invalidating the feedback. I am sure asynchronous validation is a problem that has been solved many times over; I am looking to glean from your experience in solving this kind of problem. Is this the right approach, or is there a better way to approach this?

  • Oracle Insurance Gets Innovative with Insurance Business Intelligence

    - by nicole.bruns(at)oracle.com
    Oracle Insurance announced yesterday the availability of Oracle Insurance Insight 7.0, an insurance-specific data warehouse and business intelligence (BI) system that transforms the traditional approach to BI by involving business users in its creation and maintenance.

    "Rapid access to business intelligence is essential to compete and thrive in today's insurance industry," said Srini Venkatasantham, vice president, Product Strategy, Oracle Insurance. "The adaptive data modeling approach of Oracle Insurance Insight 7.0, combined with the insurance-specific data model, offers global insurance companies a faster, easier way to get the intelligence they need to make better-informed business decisions."

    New features in Oracle Insurance Insight 7.0 include:

    "Adaptive data modeling" via the new warehouse palette: gives business users the power to configure lines of business via an easy-to-use warehouse palette tool. Oracle Insurance Insight then automatically creates data warehouse elements - such as line-specific database structures and extract-transform-load (ETL) processes - speeding up time-to-value for BI initiatives.

    Out-of-the-box insurance models or create-from-scratch option: includes pre-built content and interfaces for six Property and Casualty (P&C) lines. Additionally, insurers can use the warehouse palette to deploy any and all P&C or General Insurance lines of business from scratch, helping insurers support operations in any country.

    Leverages Oracle technologies: in addition to Oracle Business Intelligence Enterprise Edition, the solution includes Oracle Database 11g as well as Oracle Data Integrator Enterprise Edition 11g, which delivers an Extract, Load and Transform (E-L-T) architecture and eliminates the need for a separate transformation server. Additionally, the expanded Oracle technology infrastructure enables support for Oracle Exadata.

    Martina Conlon, a Principal with Novarica's Insurance practice and author of Business Intelligence in Insurance: Current State, Challenges, and Expectations, says, "The need for continued investment by insurers in business intelligence capabilities is widely understood, and the industry is acting. Arming the business intelligence implementation with predefined insurance specific content, and flexible and configurable technology will get these projects up and running faster."

    Learn more: to see a demo of the Oracle Insurance Insight system, click here. To read the press announcement, click here.

  • If some standards apply when "it depends" then should I stick with custom approaches?

    - by Travis J
    If I have an unconventional approach which works better than the industry standard, should I just stick with it even though in principle it violates those standards? What I am talking about is referential integrity in relational database management systems. The standard way of enforcing referential integrity is to CASCADE on delete. In practice, this is just not going to work all the time; in my current case, it does not. The suggested alternatives are to either change the reference to NULL or DEFAULT, or just to take NO ACTION - usually in the form of a "soft delete". I am all about enforcing referential integrity. Love it. However, sometimes it just is not practical to apply all the standards in full. My approach has been to abandon a small part of one of those practices - the part about not leaving "hanging references" around. Oops. The trade-off is well worth it in this situation, I believe. Instead of having deprecated data in the production database, a splattering of "soft delete" logic all across my controllers (and sometimes views, depending on how far down the chain the soft delete occurred), and the prospect of queries taking longer and longer - instead of all that - I now have a recycle bin and centralized logic. The only trade-off is that I must explicitly manage the possibility of "hanging references", which can be done through generics with one class. Any thoughts?
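
    A bare-bones illustration of the recycle-bin idea (made-up schema, standard-library sqlite3; the real system presumably sits behind an ORM and the generics class described above): a delete moves the row into an archive table inside a single transaction, so the live tables stay free of soft-delete flags while nothing is actually lost.

        # Hedged sketch: "delete" = archive + remove, in one transaction.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE orders     (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
            CREATE TABLE orders_bin (id INTEGER, customer_id INTEGER, total REAL,
                                     deleted_at TEXT DEFAULT CURRENT_TIMESTAMP);
            INSERT INTO orders VALUES (1, 42, 19.99);
        """)

        def delete_order(order_id):
            with conn:   # one transaction: both statements succeed or neither does
                conn.execute("INSERT INTO orders_bin (id, customer_id, total) "
                             "SELECT id, customer_id, total FROM orders WHERE id = ?",
                             (order_id,))
                conn.execute("DELETE FROM orders WHERE id = ?", (order_id,))

        delete_order(1)
        print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0])        # -> 0
        print(conn.execute("SELECT id, deleted_at FROM orders_bin").fetchone()) # archived row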
