Search Results

Search found 19449 results on 778 pages for 'query builder'.


  • How to pass traffic for port 80 not through OpenVPN?

    - by moti
    Is there a way to configure OpenVPN clients to route traffic for HTTP (port 80) and HTTPS (port 443) directly (i.e. not through the VPN, but through the regular default gateway the clients have)? All other traffic should go through the VPN. My client is running OpenVPN on Windows and my current configuration looks like this:

    client
    dev tun
    proto tcp
    remote my-server-2 1194
    resolv-retry infinite
    nobind
    persist-key
    persist-tun
    ca ../keys/ca.crt
    cert ../keys/client1.crt
    key ../keys/client1.key
    ns-cert-type server
    verb 3
    route-metric 1
    show-net-up
    dhcp-renew
    dhcp-release
    route-delay 0 120
    hand-window 180
    management localhost 13010
    management-hold
    management-query-passwords
    management-forget-disconnect
    management-signal
    auth-user-pass
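
    One thing worth knowing before tuning this config: OpenVPN's route directives select traffic by destination network, not by TCP port, so ports 80/443 cannot be split off with route statements alone; true port-based split tunneling needs policy routing or firewall rules on the client. What the client config can do is refuse server-pushed routes and tunnel only chosen networks. A minimal sketch (the networks are placeholders):

    # Sketch: ignore routes pushed by the server.
    route-nopull
    # Tunnel only these networks; everything else (including web traffic
    # on ports 80/443) keeps using the client's default gateway.
    route 10.8.0.0 255.255.0.0
    route 192.168.100.0 255.255.255.0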

    Read the article

  • Lucene and .NET Part I

    - by javarg
    I've been playing around with Lucene.NET, trying to get a feel for what is required to develop and implement a full business application using it. As you would imagine, many things are required to implement a robust solution for indexing content and searching it afterwards. Lucene is a great, robust solution for indexing content: a fast, high-performance search engine library available in Java and .NET. You will want to use this library in scenarios such as:

    - In Windows Azure, to support Full Text Search (a functionality not currently supported by SQL Azure)
    - When storing files outside of, or not managed by, your database (as in large document-storage solutions that use the file system)
    - When Full Text Search is not really what you need

    Lucene is more than a Full Text Search solution. It has several analyzers that let you process and search content in different ways (decomposing sentences, deriving words, removing articles, etc.). When deciding to implement indexing using Lucene, you will need to take the following into account:

    - How and when content is to be indexed by Lucene: using a service that runs at a specific interval, or immediately when content changes
    - When content is to be available for searching (as in real-time content search): immediately when content changes (near-real-time searching), or after a few minutes
    - Ease of maintainability and development

    Some technical concerns: when indexing content, indexes are locked for write operations by the IndexWriter. This means that Lucene is best suited to indexing content using a single-writer approach. When searching, IndexReaders take a snapshot of the indexes. This has the following implications:

    - Setting up an index reader is a costly task. You are not supposed to create one for each query or search. A good practice is to create readers and reuse them for several searches.
    - The latter means that even when the content gets updated, you won't be able to see the changes. You will need to recycle the reader.

    In the second part of this post we will review some alternatives and design considerations.
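
    To make the single-writer / snapshot-reader points concrete, here is a minimal sketch against the Lucene.Net 3.0 API (the index path and field names are illustrative):

    using Lucene.Net.Analysis.Standard;
    using Lucene.Net.Documents;
    using Lucene.Net.Index;
    using Lucene.Net.QueryParsers;
    using Lucene.Net.Search;
    using Lucene.Net.Store;

    // Index a document with a single, shared IndexWriter.
    var dir = FSDirectory.Open(@"C:\index");
    var analyzer = new StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_30);
    using (var writer = new IndexWriter(dir, analyzer, IndexWriter.MaxFieldLength.UNLIMITED))
    {
        var doc = new Document();
        doc.Add(new Field("id", "42", Field.Store.YES, Field.Index.NOT_ANALYZED));
        doc.Add(new Field("body", "content to index", Field.Store.YES, Field.Index.ANALYZED));
        writer.AddDocument(doc);
        writer.Commit();
    }

    // Search with a reusable IndexSearcher; it sees a snapshot of the index,
    // so it must be recycled after the index changes.
    var searcher = new IndexSearcher(dir, readOnly: true);
    var parser = new QueryParser(Lucene.Net.Util.Version.LUCENE_30, "body", analyzer);
    var hits = searcher.Search(parser.Parse("content"), 10);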

    Read the article

  • Take Control of Workflow with Workflow Analyzer!

    - by user793553
    Take Control of Workflow with Workflow Analyzer! Immediate analysis and output of your EBS Workflow environment. The EBS Workflow Analyzer is a script that reviews the current Workflow footprint, analyzes the configuration and environment, and provides feedback and recommendations on best practices and areas of concern. Go to Doc ID 1369938.1 for more details, the script download, and a short overview video.

    Proactive benefits:

    - Immediate analysis and output of the Workflow environment
    - Identifies aged records
    - Identifies Workflow errors and volumes
    - Identifies looping Workflow items and stuck activities
    - Identifies Workflow system setup and configurations
    - Identifies and recommends Workflow best practices
    - Easy-to-add tool for regular Workflow maintenance
    - Execute the analysis anytime to compare trends against past outputs

    The Workflow Analyzer presents key details in an easy-to-review graphical manner. See the examples below.

    Workflow Runtime Data Table Gauge: the gauge shows critical (red), bad (yellow), and good (green) states depending on the number of workflow items (WF_ITEMS).

    Workflow Error Notifications Pie Chart: a pie chart shows the workflow error notification types.

    Workflow Runtime Table Footprint Bar Chart: a bar chart shows the workflow runtime table footprint.

    The analyzer also gives detailed listings of setups and configurations. As an example, the workflow services are listed along with their status for review. The analyzer draws attention to key details with yellow and red boxes highlighting areas for review. You can extend any query by reviewing the SQL script and then running it on your own, or modifying it for your own needs.

    Find more details in these notes:

    - Doc ID 1369938.1, Workflow Analyzer script for E-Business Suite Workflow Monitoring and Maintenance
    - Doc ID 1425053.1, How to run EBS Workflow Analyzer Tool as a Concurrent Request

    Or visit the My Oracle Support EBS - Core Workflow Community.
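
    If you want to poke at the same runtime data the analyzer's gauges are built from, a hedged starting point (WF_ITEMS is the runtime table the gauge above counts; adjust for your own instance):

    -- Count runtime workflow items per item type, largest footprint first.
    SELECT item_type, COUNT(*) AS item_count
    FROM   wf_items
    GROUP  BY item_type
    ORDER  BY item_count DESC;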

    Read the article

  • Simple recursive DNS resolver for debugging (app or VM)

    - by notpeter
    I have an issue which I believe is caused by incorrect DNS queries (doubled subdomains like _record.host.subdomain.tld.subdomain.tld) when querying for SRV records. So I need an alternate DNS server with heavy logging, so I can see every query (especially the stupid ones), acting as a recursive resolver with the ability to create records that override real DNS records. That way I can not only find the records it's (wrongly) looking for, but populate those records as well. I know I could install a DNS server on yet another Linux box, but I feel like this is the sort of thing someone may have already set up as a simple Python script or single-use VM just for this purpose.
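
    In that spirit, here is a minimal sketch of such a resolver built on the third-party dnslib package (pip install dnslib); the upstream address, port, and override table are placeholders:

    # log_resolver.py: log every query, override selected names, forward the rest.
    from dnslib import QTYPE, RR, DNSRecord
    from dnslib.server import BaseResolver, DNSServer

    class LoggingResolver(BaseResolver):
        def __init__(self, upstream="8.8.8.8", overrides=None):
            self.upstream = upstream
            self.overrides = overrides or {}  # fully-qualified name -> fake A record

        def resolve(self, request, handler):
            qname = str(request.q.qname)
            print("QUERY: %s %s" % (QTYPE[request.q.qtype], qname))
            if qname in self.overrides:
                reply = request.reply()
                reply.add_answer(*RR.fromZone("%s 60 IN A %s" % (qname, self.overrides[qname])))
                return reply
            # Forward everything else to the upstream resolver.
            return DNSRecord.parse(request.send(self.upstream, 53, timeout=5))

    overrides = {"host.example.com.": "10.0.0.5"}  # placeholder override
    DNSServer(LoggingResolver(overrides=overrides), address="0.0.0.0", port=5353).start()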

    Read the article

  • Building Simple Workflows in Oozie

    - by dan.mcclary
    Introduction

    More often than not, data doesn't come packaged exactly as we'd like it for analysis. Transformation, match-merge operations, and a host of data-munging tasks are usually needed before we can extract insights from our Big Data sources. Few people find data munging exciting, but it has to be done. Once we've suffered that boredom, we should take steps to automate the process. We want to codify our work into repeatable units and create workflows which we can leverage over and over again without having to write new code. In this article, we'll look at how to use Oozie to create a workflow for the parallel machine learning task I described on Cloudera's site.

    Hive Actions: Prepping for Pig

    In my parallel machine learning article, I use data from the National Climatic Data Center to build weather models on a state-by-state basis. NCDC makes the data freely available as gzipped files of day-over-day observations stretching from the 1930s to today. In reading that post, one might get the impression that the data came in handy, ready-to-model files with convenient delimiters. The truth of it is that I need to perform some parsing and projection on the dataset before it can be modeled. If I get more observations, I'll want to retrain and test those models, which will require more parsing and projection. This is a good opportunity to start building up a workflow with Oozie.

    I store the data from the NCDC in HDFS and create an external Hive table partitioned by year. This gives me the flexibility of Hive's query language when I want it, but lets me put the dataset in a directory of my choosing in case I want to treat the same data with Pig or MapReduce code.

    CREATE EXTERNAL TABLE IF NOT EXISTS historic_weather(column1 string, column2 string)
    PARTITIONED BY (yr string)
    STORED AS ...
    LOCATION '/user/oracle/weather/historic';

    As new weather data comes in from NCDC, I'll need to add partitions to my table. That's an action I should put in the workflow. Similarly, the weather data requires parsing in order to be useful as a set of columns. Because of their long history, the weather data is broken up into fields of specific byte lengths: x bytes for the station ID, y bytes for the dew point, and so on. The delimiting is consistent from year to year, so writing a SerDe or a parser for transformation is simple. Once that's done, I want to select columns on which to train, classify certain features, and place the training data in an HDFS directory for my Pig script to access.

    ALTER TABLE historic_weather
    ADD IF NOT EXISTS PARTITION (yr='2011')
    LOCATION '/user/oracle/weather/historic/yr=2011';

    INSERT OVERWRITE DIRECTORY '/user/oracle/weather/cleaned_history'
    SELECT w.stn, w.wban, w.weather_year, w.weather_month,
           w.weather_day, w.temp, w.dewp, w.weather
    FROM (
      FROM historic_weather
      SELECT TRANSFORM(...)
      USING '/path/to/hive/filters/ncdc_parser.py'
      AS stn, wban, weather_year, weather_month, weather_day, temp, dewp, weather
    ) w;

    Since I'm going to prepare training directories with at least the same frequency that I add partitions, I should also add that to my workflow. Oozie is going to invoke these Hive actions using what's somewhat obviously referred to as a Hive action. Hive actions amount to Oozie running a script file containing our query language statements, so we can place them in a file called ncdc_parse.hql.

    Starting Our Workflow

    Oozie offers two types of jobs: workflows and coordinator jobs. Workflows are straightforward: they define a set of actions to perform as a sequence or directed acyclic graph.
    Coordinator jobs can take all the same actions as workflow jobs, but they can be started automatically, either periodically or when new data arrives in a specified location. To keep things simple we'll make a workflow job; coordinator jobs simply require another XML file for scheduling. The bare minimum workflow XML defines a name, a starting point, and an end point:

    <workflow-app name="WeatherMan" xmlns="uri:oozie:workflow:0.1">
      <start to="ParseNCDCData"/>
      <end name="end"/>
    </workflow-app>

    To this we need to add an action, and within that we'll specify the Hive parameters. Also, keep in mind that actions require <ok> and <error> tags to direct the next action on success or failure.

    <action name="ParseNCDCData">
      <hive xmlns="uri:oozie:hive-action:0.2">
        <job-tracker>localhost:8021</job-tracker>
        <name-node>localhost:8020</name-node>
        <configuration>
          <property>
            <name>oozie.hive.defaults</name>
            <value>/user/oracle/weather_ooze/hive-default.xml</value>
          </property>
        </configuration>
        <script>ncdc_parse.hql</script>
      </hive>
      <ok to="WeatherMan"/>
      <error to="end"/>
    </action>

    There are a couple of things to note here:

    - I have to give the FQDN (or IP) and port of my JobTracker and NameNode.
    - I have to include a hive-default.xml file.
    - I have to include a script file.
    - The hive-default.xml and script file must be stored in HDFS.

    That last point is particularly important. Oozie doesn't make assumptions about where a given workflow is being run. You might submit workflows against different clusters, or have different hive-default.xml files on different clusters (e.g. MySQL- or Postgres-backed metastores). A quick way to ensure that all the assets end up in the right place in HDFS is just to make a working directory locally, build your workflow.xml in it, and copy the assets you'll need into it as you add actions to workflow.xml. At this point, our local directory should contain:

    - workflow.xml
    - hive-default.xml (make sure this file contains your metastore connection data)
    - ncdc_parse.hql

    Adding Pig to the Ooze

    Adding our Pig script as an action is slightly simpler from an XML standpoint. All we do is add an action to workflow.xml as follows:

    <action name="WeatherMan">
      <pig>
        <job-tracker>localhost:8021</job-tracker>
        <name-node>localhost:8020</name-node>
        <script>weather_train.pig</script>
      </pig>
      <ok to="end"/>
      <error to="end"/>
    </action>

    Once we've done this, we'll copy weather_train.pig to our working directory. However, there's a bit of a "gotcha" here. My Pig script registers the Weka JAR and a chunk of Jython. If those aren't also in HDFS, our action will fail from the outset -- but where do we put them? The Jython script goes into the working directory at the same level as the Pig script, because Pig attempts to load Jython files from the directory in which the script executes. However, that's not where our Weka JAR goes.

    While Oozie doesn't assume much, it does make an assumption about the Pig classpath. Anything under working_directory/lib gets automatically added to the Pig classpath and no longer requires a REGISTER statement in the script. Anything that uses a REGISTER statement cannot be in the working_directory/lib directory. Instead, it needs to be in a different HDFS directory and attached to the Pig action with an <archive> tag. Yes, that's as confusing as you think it is. You can get the exact rules for adding JARs to the distributed cache from Oozie's Pig Cookbook.

    Making the Workflow Work

    We've got a workflow defined and have collected all the components we'll need to run it.
    But we can't run anything yet, because we still have to define some properties about the job and submit it to Oozie. We need to start with the job properties, as this is essentially the "request" we'll submit to the Oozie server. In the same working directory, we'll make a file called job.properties as follows:

    nameNode=hdfs://localhost:8020
    jobTracker=localhost:8021
    queueName=default
    weatherRoot=weather_ooze
    mapreduce.jobtracker.kerberos.principal=foo
    dfs.namenode.kerberos.principal=foo
    oozie.libpath=${nameNode}/user/oozie/share/lib
    oozie.wf.application.path=${nameNode}/user/${user.name}/${weatherRoot}
    outputDir=weather-ooze

    While some of the pieces of the properties file are familiar (e.g., the JobTracker address), others take a bit of explaining. The first is weatherRoot: this is essentially an environment variable for the script (as are jobTracker and queueName). We're simply using them to simplify the directives for the Oozie job. The oozie.libpath piece is extremely important. This is a directory in HDFS which holds Oozie's shared libraries: a collection of JARs necessary for invoking Hive, Pig, and other actions. It's a good idea to make sure this has been installed and copied up to HDFS. The last two lines are straightforward: run the application defined by workflow.xml at the application path listed, and write the output to the output directory.

    We're finally ready to submit our job! After all that work we only need to do a few more things:

    1. Validate our workflow.xml
    2. Copy our working directory to HDFS
    3. Submit our job to the Oozie server
    4. Run our workflow

    Let's do them in order. First, validate the workflow:

    oozie validate workflow.xml

    Next, copy the working directory up to HDFS:

    hadoop fs -put working_dir /user/oracle/working_dir

    Now we submit the job to the Oozie server. We need to ensure that we've got the correct URL for the Oozie server, and we need to specify our job.properties file as an argument:

    oozie job -oozie http://url.to.oozie.server:port_number/ -config /path/to/working_dir/job.properties -submit

    We've submitted the job, but we don't see any activity on the JobTracker. All we got was this funny bit of output:

    14-20120525161321-oozie-oracle

    This is because submitting a job to Oozie creates an entry for the job and places it in PREP status. What we got back, in essence, is a ticket for our workflow to ride the Oozie train. We're responsible for redeeming our ticket and running the job:

    oozie job -oozie http://url.to.oozie.server:port_number/ -start 14-20120525161321-oozie-oracle

    Of course, if we really want to run the job from the outset, we can change the "-submit" argument above to "-run". This will prep and run the workflow immediately.

    Takeaway

    So, there you have it: the somewhat laborious process of building an Oozie workflow. It's a bit tedious the first time out, but it does present a pair of real benefits to those of us who spend a great deal of time data munging. First, when new data arrives that requires the same processing, we already have the workflow defined and ready to run. Second, as we build up a set of useful action definitions over time, creating new workflows becomes quicker and quicker.

    Read the article

  • An XML file or Database?

    - by webnoob
    I am re-writing a section of my site and am trying to decide how much of a rewrite this will be. At the moment I have a web service feed that generates an XML file once per day. I then use this XML file on my website to generate the general structure. I am trying to decide if this information should be located in the database or stay in the XML file. The file can range from 4MB to 12MB, and its depth can go on and on, so I have to recurse to find the data I want. I use the .NET serializer classes and store the deserialized object in a global variable to avoid re-deserializing it each time the page is loaded. My reasons for thinking a database would be better are:

    - I would know exactly where I am in the file by using an internal ID, so I wouldn't have to recurse the file to get information.
    - I wouldn't have to load / deserialize the XML and could just use my already open database connections.
    - Searching for the data would be quicker(?), as I would just perform a SQL query rather than recursing the file.

    Has anyone got any ideas about which is better, and which option uses more resources on the server or would be quicker?

    EDIT: The file is read every time the web page is loaded (although only deserialized once). It isn't written to by standard users (only by an admin task that runs in the middle of the night). This is my initial investigation before mocking up.
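
    For the current approach, a minimal sketch of the one-time deserialization cache (FeedRoot and the file path are hypothetical placeholders):

    using System;
    using System.IO;
    using System.Xml.Serialization;

    public class FeedRoot { /* shape mirrors the feed XML */ }

    public static class FeedCache
    {
        // Lazy<T> deserializes the XML once, thread-safely, on first use.
        private static readonly Lazy<FeedRoot> _feed = new Lazy<FeedRoot>(() =>
        {
            var serializer = new XmlSerializer(typeof(FeedRoot));
            using (var stream = File.OpenRead(@"C:\data\feed.xml"))  // hypothetical path
                return (FeedRoot)serializer.Deserialize(stream);
        });

        public static FeedRoot Feed { get { return _feed.Value; } }
    }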

    Read the article

  • DNS Server Spoofed Request Amplification DDoS - Prevention

    - by Shackrock
    I've been conducting security scans, and a new one popped up for me:

    DNS Server Spoofed Request Amplification DDoS: The remote DNS server answers to any request. It is possible to query the name servers (NS) of the root zone ('.') and get an answer which is bigger than the original request. By spoofing the source IP address, a remote attacker can leverage this 'amplification' to launch a denial-of-service attack against a third-party host using the remote DNS server. General solution: restrict access to your DNS server from the public network, or reconfigure it to reject such queries.

    I'm hosting my own DNS for my website. I'm not sure what the solution is here... I'm really looking for some concrete, detailed steps to patch this, but haven't found any yet. Any ideas? CentOS 5 with WHM and cPanel. Also see: http://securitytnt.com/dns-amplification-attack/
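
    On a cPanel box the nameserver is typically BIND, and the usual fix for this finding is to stop answering recursive queries for the public while staying authoritative for your own zones. A hedged named.conf sketch (the ACL network is a placeholder):

    acl "trusted" { 127.0.0.1; 192.0.2.0/24; };  // placeholder local network

    options {
        recursion no;              // authoritative-only: refuse recursive lookups
        additional-from-cache no;  // don't answer from cache in referrals
        allow-query { any; };      // queries for your own zones stay public
        // If local clients still need lookups, drop "recursion no" and use:
        // allow-recursion { "trusted"; };
    };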

    Read the article

  • What kind of DNS and IIS configuration is needed to allow multiple domains to point to a multi-tenant web application?

    - by holiveira
    I'm developing a multi-tenant web application in ASP.NET MVC, and it will provide my users the ability to have a custom subdomain pointing to their account page (like user.myapp.com). I already have it working by using a wildcard DNS entry and code that queries the database to load the user data based on the domain. I'm planning to offer the possibility of using custom domains, allowing the users to buy their own domains and use them instead of the subdomains that are provided by default. I currently use DNSMadeEasy to host the DNS for the application's main domain. I just don't know what kind of settings I must make to allow this feature to work, since the users will have domains hosted with several companies. Will I have to create my own nameservers and provide them to my users? What other things must I consider to implement it efficiently?
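
    One common pattern that avoids running your own nameservers is to have each customer point their DNS at your application, then match the incoming Host header against your tenant table exactly as you already do for wildcard subdomains. A hedged sketch of the records on the customer's side (names and the IP are placeholders):

    ; In the customer's zone (customerdomain.com), at their own DNS host:
    www  IN CNAME  apps.myapp.com.  ; hostname points at your app endpoint
    @    IN A      203.0.113.10     ; zone apex can't be a CNAME, so use an A record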

    Read the article

  • SQL Performance Problem IA64

    - by Vendoran
    We've got a performance problem in production. The QA and DEV environments are two instances on the same physical server: Windows 2003 Enterprise SP2, 32 GB RAM, 1 quad-core 3.5 GHz Intel Xeon X5270 (4 cores, x64), SQL 2005 SP3 (9.0.4262), SAN drives.

    Prod: Windows 2003 Datacenter SP2, 64 GB RAM, 4 dual-core 1.6 GHz Intel Family 80000002, Model 6 Itanium (8 cores, IA64), SQL 2005 SP3 (9.0.4262), SAN drives, Veritas Cluster.

    I am seeing excessive signal wait percentages (> 250%), and Page Reads/s (50) and Page Writes/s (25) are both occasionally high. I did test this query on both QA and PROD; it has the same execution plan and even the same stats:

    SELECT top 40000000 *
    INTO dbo.tmp_tbl
    FROM dbo.tbl
    GO

    Scan count 1, logical reads 429564, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    As you can see it's just logical reads. However:

    QA: 0:48
    Prod: 2:18

    So it seems like a processor-related issue, but I'm not sure where to go next. Any ideas? Thanks, Aaron
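
    For what it's worth, the usual way to measure signal waits (time spent waiting for a CPU scheduler rather than a resource) is sys.dm_os_wait_stats; a quick sketch:

    -- Share of total wait time that is signal wait; high values suggest CPU pressure.
    SELECT CAST(100.0 * SUM(signal_wait_time_ms) / SUM(wait_time_ms)
                AS numeric(5,2)) AS signal_wait_pct
    FROM sys.dm_os_wait_stats;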

    Read the article

  • Redirecting to a URL that has a question mark in it?

    - by dkmojo
    I have a somewhat strange problem. A client has moved their site to WordPress. They use a service for link exchanges that has a WordPress plugin. The issue is that the new links pages use a query string to display the correct content, and I cannot figure out how to redirect the old URLs correctly.

    Old URLs look like this: domain.com/link/category-name.html

    The plugin makes them look like this in WP: domain.com/links/?page=category-name.html

    How in the world can I get the redirect to work properly? Here's what I have tried:

    Redirect 301 /link/actors.html http://www.artisticimages.biz/links/?page=actors.html
    Redirect 301 /link/actors.html http://www.artisticimages.biz/links/%3Fpage=actors.html
    Redirect 301 /link/actors.html http://www.artisticimages.biz/links/\?page=actors.html

    But none of those have worked. Any help is greatly appreciated!
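
    One approach worth trying, assuming mod_rewrite is available (it usually is wherever WordPress permalinks work): mod_alias's Redirect treats its target as a plain location and is finicky with query strings, whereas mod_rewrite handles a query string in the substitution explicitly. A hedged .htaccess sketch:

    # Map /link/<name>.html to /links/?page=<name>.html with a permanent redirect.
    RewriteEngine On
    RewriteRule ^link/(.+)\.html$ http://www.artisticimages.biz/links/?page=$1.html [R=301,L]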

    Read the article

  • LDAP search filter for Active Directory

    - by Francesco De Vittori
    Hello, I'm trying to look up users inside Active Directory through an LDAP query. Basically I'm searching for the user in this way:

    Search DN: dc=mydomain, dc=com
    Filter: (sAMAccountName=USER)

    where USER is replaced with the provided username. Now, if USER is only the username without the domain (for example "Joe"), this works fine. However, I receive usernames in the form domain\username (for example "myDomain\Joe"), and obviously the search fails. I see two ways: using a regex inside the search filter to discard the domain, or using a completely different search filter. I'm no LDAP expert and I don't even know if it's possible to use regular expressions inside search filters. Does anyone know if it's possible, and how?

    P.S. I cannot pre-process the username to strip the domain. This cannot be changed, as it's all part of a large system.
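
    For reference, LDAP search filters (RFC 4515) support only '*' wildcards, not regular expressions, and sAMAccountName is stored without the domain part. A typical AD user filter, for whatever USER value you end up matching on, looks like this (a hedged sketch):

    (&(objectCategory=person)(objectClass=user)(sAMAccountName=USER))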

    Read the article

  • Vagrant with VMware ESXi Provider

    - by Adam
    I am attempting to work with Vagrant and the vagrant-vsphere plugin to deploy machines to my VMware ESXi server. Has anyone had any luck in getting this to work? I realize that vagrant-vsphere is still 0.0.1, though, and there are bound to be bugs. Specifically, Vagrant and vagrant-vsphere appear to fail during the vSphere connection; however, SSH and CLI access are enabled, and the vSphere PowerShell is able to connect without an issue.

    INFO warden: Calling action: #
    ERROR warden: Error occurred: VagrantPlugins::VSphere::Errors::VSphereError

    The hostd log file on the ESXi server shows Vagrant doing a SearchIndex query.

    Read the article

  • nginx points the sub-directory of an alias folder to the base directory

    - by Starry
    I am new to Nginx, and I have a confusion about nginx configurations. My web site contains folders in different locations:

    location / {
        root /Path1;
    }

    location ^~ /personal {
        alias /Path2;
    }

    When I query http://mysite/personal, I am accessing the content of /Path2 instead of /Path1. Now I want to add a sub-directory in /personal with specific configurations, so I add:

    location /personal/download {
        autoindex on;
    }

    But I get a 404 error when querying http://mysite/personal/download. According to the error log, I am directed to /Path1/personal/download, which is not correct. How can I configure nginx so that all access to http://mysite/personal/* will be directed to the same directory in /Path2?
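
    The behaviour follows from nginx choosing exactly one location per request: the new /personal/download block wins the longest-prefix match, and since it declares no root or alias of its own, it falls back to the inherited root (/Path1 here). A hedged sketch of one way around it, nesting the sub-location and restating the mapping (alias is not inherited by nested locations):

    location ^~ /personal {
        alias /Path2;

        location /personal/download {
            alias /Path2/download;  # alias isn't inherited, so restate it
            autoindex on;
        }
    }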

    Read the article

  • How to grow from single server setup

    - by Jenkz
    I'm looking for resources on how to grow our server setup. We currently have one dedicated server with Rackspace in the UK of the following spec: HPDL385_G2_PrevGen, HP single dual-core Opteron 2214 (2.2 GHz), 4 GB RAM, 2x 10,000 RPM SCSI drives in RAID 1. Our traffic is up to 550,000 UVs per month. The site runs off a PHP and MySQL setup. The database gets an absolute hammering; we have many complex queries joining multiple tables. We are using APC for PHP caching. I'm getting to the stage where I've done as much DB and query optimisation as I can, and I wonder what the next step should be. I've looked at memcache, but I've got the impression that this requires a large amount of RAM and ideally a dedicated box. So is the next step to have two boxes, one for the database and one for Apache? Or is there a step I've overlooked? Our load is usually around the 2 mark, but right now it's up at 20!
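
    For what it's worth, memcached only uses the memory you assign it, so a cache-aside layer can start small on the existing box before you split tiers. A hedged PHP sketch using the pecl Memcached extension (DSN, key, and TTL are illustrative):

    <?php
    // Cache-aside: serve hot query results from memcached, fall back to MySQL.
    $cache = new Memcached();
    $cache->addServer('127.0.0.1', 11211);

    $key  = 'report:heavy_join';
    $rows = $cache->get($key);
    if ($rows === false) {                      // cache miss: run the expensive query
        $pdo  = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
        $rows = $pdo->query('SELECT ... /* expensive multi-table join */')->fetchAll();
        $cache->set($key, $rows, 300);          // cache for five minutes
    }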

    Read the article

  • Jump to a page of a PDF in Google Docs / Drive / Apps

    - by Aaron - Solution Evangelist
    I want to jump to a specific page of a PDF file via Google Docs, using either the editor URL https://docs.google.com/file/d/xxx/edit or the embed URL https://docs.google.com/file/d/xxx/preview. I am not looking to use the http://docs.google.com/gview?url= viewer referenced in the Stack Overflow question "how to open specific page on Google's docs viewer", as I want to do this for documents where authentication is required and the document is not available via a public URL. Is there some way of appending an anchor (I would have expected it to be https://docs.google.com/file/d/xxx/preview#10) or a query (e.g. https://docs.google.com/file/d/xxx/preview?page=10) to the Google Docs / Drive / Apps viewer?

    Read the article

  • Point DNS server to root DNS servers [duplicate]

    - by Dhaksh
    This question already has answers here: "What is a glue record?" (3 answers) and "Why does DNS work the way it does?" (4 answers)

    I have set up a custom, authoritative-only DNS server using BIND 9, in a master and slave arrangement. Assume the DNS servers are:

    ns1.customdnsserver.com [192.168.91.129] ==> Master
    ns2.customdnsserver.com [192.168.91.130] ==> Slave

    Now I will host a few shared-hosting websites on my own web server, and I will link the above nameservers to my domains in shared hosting. My question is: how do I tell the root DNS servers about my own authoritative-only DNS server? So that when someone queries for the domain www.example.com, and the domain's website is hosted on my shared hosting, I want the root servers to point the query to my own DNS server so that www.example.com gets resolved to an IP address.

    Read the article

  • PowerShell (sqlps) LastBackupDate not changing despite having run a SQL Server backup

    - by user1666376
    I'm using PowerShell to check last backup times across all our SQL Server databases. This seems to work really well, but I've got a question. If I run this (a cut-down version of the actual script):

    dir SQLSERVER:\SQL\Server1\default\databases | select parent, name, lastbackupdate

    I get:

    Parent    Name                LastBackupDate
    ------    ----                --------------
    [Server1] ADBA                10/09/2012 21:15:37
    [Server1] ReportServer        10/09/2012 21:00:17
    [Server1] ReportServerTempDB  10/09/2012 21:00:18
    [Server1] db1                 10/09/2012 21:15:35

    If I then run a SQL backup of the Server1 default instance and run the same query, the last backup date doesn't change:

    PS C:\temp> dir SQLSERVER:\SQL\Server1\default\databases | select parent, name, lastbackupdate

    Parent    Name                LastBackupDate
    ------    ----                --------------
    [Server1] ADBA                10/09/2012 21:15:37
    [Server1] ReportServer        10/09/2012 21:00:17
    [Server1] ReportServerTempDB  10/09/2012 21:00:18
    [Server1] db1                 10/09/2012 21:15:35

    ...but if I open a new PowerShell window, it shows the backup I just took:

    PS SQLSERVER:\> dir SQLSERVER:\SQL\Server1\default\databases | select parent, name, lastbackupdate

    Parent    Name                LastBackupDate
    ------    ----                --------------
    [server1] ADBA                12/09/2012 09:03:23
    [server1] ReportServer        12/09/2012 08:48:03
    [server1] ReportServerTempDB  12/09/2012 08:48:04
    [server1] db1                 12/09/2012 09:03:21

    My guess is that this is expected behaviour, but could anybody show me where it's documented or explained? I just want to understand what's going on. This is running the sqlps which came with 2008, against a 2008 instance. Thanks, Matt
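
    This is consistent with the SQL Server provider surfacing SMO objects, which cache property values for the lifetime of the session. If that is what's happening here, a hedged sketch that forces a re-read without opening a new window:

    # Refresh() asks SMO to re-read each database's properties from the server.
    dir SQLSERVER:\SQL\Server1\default\databases | ForEach-Object {
        $_.Refresh()
        $_ | Select-Object Parent, Name, LastBackupDate
    }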

    Read the article

  • Web authentication using LDAP and Apache?

    - by Stephen R
    I am working on a project to set up a web-administered inventory database for my work (and if they don't want it, then I'll enjoy learning about it), and I have hit the problem of allowing only authorized users to access the website. (In its testing/development phase, I allow everyone to navigate to the website to add entries to the database and query it.) I am trying to make it so that only particular users in the domain (Active Directory) are allowed to access the website after they are queried for their credentials. I have read that Apache (I am using a LAMP server) has a means of asking visitors to the website to provide LDAP credentials in order to gain access to the site, but I wasn't sure if that was exactly what I was looking for. If anyone has experience with the LDAP configuration for Apache that I mentioned, or any other means of securely authenticating with websites, I would greatly appreciate advice or a direction to go. Thank you!
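
    The Apache feature in question is mod_authnz_ldap. A hedged sketch of what the configuration could look like against Active Directory (every host name, DN, and group below is a placeholder for your own domain):

    # Requires mod_ldap and mod_authnz_ldap to be loaded.
    <Location /inventory>
        AuthType Basic
        AuthName "Inventory Database"
        AuthBasicProvider ldap
        AuthLDAPURL "ldap://dc1.example.com/DC=example,DC=com?sAMAccountName?sub?(objectClass=user)"
        AuthLDAPBindDN "CN=apache-bind,OU=ServiceAccounts,DC=example,DC=com"
        AuthLDAPBindPassword "secret"
        # Only members of this AD group are allowed in:
        Require ldap-group CN=InventoryUsers,OU=Groups,DC=example,DC=com
    </Location>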

    Read the article

  • Tip 13: Kill a process using C#, from local to remote

    - by StanleyGu
    1. My first choice is always to try System.Diagnostics to kill a process.

    2. The first choice works very well for killing local processes. I thought it should work for killing a remote process too, because the Process.GetProcessesByName() method is overloaded with a second argument of machine name; I pass the process name plus the remote machine name and then call the Kill() method on the result.

    3. Unfortunately, it gives me the error message "Feature is not supported for remote machines." Apparently, you can query but not kill a remote process using the Process class in System.Diagnostics. The MSDN library documentation explicitly states this about the Process class: "Provides access to local and remote processes and enables you to start and stop local system processes."

    4. I try my second choice: using System.Management to kill a process running on a remote machine. Make sure to add references to System.Management.dll and System.Management.Instrumentation.dll.

    5. The second choice works very well for killing a remote process. Just make sure the account running your program is configured to have permission to kill processes on the remote machine.
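
    A minimal sketch of the System.Management approach described in steps 4 and 5 (the machine and process names are placeholders; the calling account needs the appropriate rights on the target):

    using System;
    using System.Management;

    class RemoteKill
    {
        static void Main()
        {
            // Connect to WMI on the remote machine.
            var scope = new ManagementScope(@"\\RemoteMachine\root\cimv2");
            scope.Connect();

            // Find the target process by name.
            var query = new ObjectQuery("SELECT * FROM Win32_Process WHERE Name = 'notepad.exe'");
            using (var searcher = new ManagementObjectSearcher(scope, query))
            {
                foreach (ManagementObject process in searcher.Get())
                {
                    // Win32_Process.Terminate kills the process; 0 means success.
                    uint result = (uint)process.InvokeMethod("Terminate", null);
                    Console.WriteLine("Terminate returned {0}", result);
                }
            }
        }
    }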

    Read the article

  • Find The Bug

    - by Alois Kraus
    What does this code print, and why?

    HashSet<int> set = new HashSet<int>();
    int[] data = new int[] { 1, 2, 1, 2 };
    var unique = from i in data
                 where set.Add(i)
                 select i;
    // Compiles to: var unique = Enumerable.Where(data, (i) => set.Add(i));

    foreach (var i in unique)
    {
        Console.WriteLine("First: {0}", i);
    }

    foreach (var i in unique)
    {
        Console.WriteLine("Second: {0}", i);
    }

    The output is:

    First: 1
    First: 2

    Why is there no output from the second loop? The reason is that LINQ does not cache the results of the collection; it recalculates the contents on every new enumeration. Since I have used state (the HashSet decides which entries are part of the output), I arrive at an empty sequence: Add on the HashSet returns false for all values I have already passed in, leaving nothing to return the second time. The solution is quite simple: use the Distinct extension method, or cache the results by calling ToList() or ToArray() on the result of the LINQ query. Lesson learned: never forget to think about state in where clauses!
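
    For completeness, either fix might look like this (a short sketch of the two options above):

    // Option 1: stateless de-duplication.
    var unique = data.Distinct();

    // Option 2: keep the stateful filter but materialize the result once.
    var set2 = new HashSet<int>();
    var uniqueList = data.Where(i => set2.Add(i)).ToList();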

    Read the article

  • Game Components, Game Managers and Object Properties

    - by George Duckett
    I'm trying to get my head around component-based entity design. My first step was to create various components that could be added to an object. For every component type I had a manager, which would call every component's update function, passing in things like keyboard state etc. as required. The next thing I did was remove the object and just have each component carry an Id, so an object is defined by the components having the same Id. Now I'm thinking that I don't need a manager for all my components; for example, I have a SizeComponent which just has a Size property. As a result the SizeComponent doesn't have an update method, and its manager's update method does nothing. My first thought was to have an ObjectProperty class which components could query, instead of keeping the data as properties of components. So an object would have a number of ObjectProperty and ObjectComponent instances. Components would have update logic that queries the object for properties, and the manager would manage calling the component's update method. This seems like over-engineering to me, but I don't think I can get rid of the components, because I need a way for the managers to know which objects need which component logic to run (otherwise I'd just remove the component completely and push its update logic into the manager). Is this (having ObjectProperty, ObjectComponent, and ComponentManager classes) over-engineering? What would be a good alternative?
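
    To make the proposed split concrete, a sketch of the three classes in question (the names come from the question; the shapes are guesses):

    using System.Collections.Generic;

    // Pure data attached to an entity id (e.g. "Size"); it has no update logic.
    class ObjectProperty<T>
    {
        public int EntityId;
        public string Name;
        public T Value;
    }

    // Behaviour attached to an entity id; it reads properties rather than owning data.
    interface IObjectComponent
    {
        int EntityId { get; }
        void Update(float dt);
    }

    // One manager per component type; only component types with logic need one.
    class ComponentManager<TComponent> where TComponent : IObjectComponent
    {
        readonly List<TComponent> components = new List<TComponent>();
        public void Add(TComponent component) => components.Add(component);
        public void Update(float dt)
        {
            foreach (var component in components)
                component.Update(dt);  // data-only aspects live as ObjectProperty instead
        }
    }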

    Read the article

  • MySQL Connect Only 10 Days Away - Focus on InnoDB Sessions

    - by Bertrand Matthelié
    Time flies and MySQL Connect is only 10 days away! You can check out the full program here as well as in the September edition of the MySQL newsletter. Mat recently blogged about the MySQL Cluster sessions you'll have the opportunity to attend, and below are those focused on InnoDB. Remember you can plan your schedule with Schedule Builder.

    Saturday, 1.00 pm, Room Golden Gate 3: 10 Things You Should Know About InnoDB (Calvin Sun, Oracle)
    InnoDB is the default storage engine for Oracle's MySQL as of MySQL Release 5.5. It provides the standard ACID-compliant transactions, row-level locking, multiversion concurrency control, and referential integrity. InnoDB also implements several innovative technologies to improve its performance and reliability. This presentation gives a brief history of InnoDB; its main features; and some recent enhancements for better performance, scalability, and availability.

    Saturday, 5.30 pm, Room Golden Gate 4: Demystified MySQL/InnoDB Performance Tuning (Dimitri Kravtchuk, Oracle)
    This session covers performance tuning with MySQL and the InnoDB storage engine for MySQL and explains the main improvements made in MySQL Release 5.5 and Release 5.6. Which setting for which workload? Which value will be better for my system? How can I avoid potential bottlenecks from the beginning? Do I need a purge thread? Is it true that InnoDB doesn't need thread concurrency anymore? These and many other questions are asked by DBAs and developers. Things are changing quickly and constantly, and there is no "silver bullet." But understanding the configuration setting's impact is already a huge step in performance improvement. Bring your ideas and problems to share them with others; the discussion is open, just moderated by a speaker.

    Sunday, 10.15 am, Room Golden Gate 4: Better Availability with InnoDB Online Operations (Calvin Sun, Oracle)
    Many top web properties rely on Oracle's MySQL as a critical piece of infrastructure for serving millions of users. Database availability has become increasingly important. One way to enhance availability is to give users full access to the database during data definition language (DDL) operations. The online DDL operations in recent MySQL releases offer users the flexibility to perform schema changes while having full access to the database, that is, with minimal delay of operations on a table and without rebuilding the entire table. These enhancements provide better responsiveness and availability in busy production environments. This session covers these improvements in the InnoDB storage engine for MySQL for online DDL operations such as add index, drop foreign key, and rename column.

    Sunday, 11.45 am, Room Golden Gate 7: Developing High-Throughput Services with NoSQL APIs to InnoDB and MySQL Cluster (Andrew Morgan and John Duncan, Oracle)
    Ever-increasing performance demands of web-based services have generated significant interest in providing NoSQL access methods to MySQL (MySQL Cluster and the InnoDB storage engine of MySQL), enabling users to maintain all the advantages of their existing relational databases while providing blazing-fast performance for simple queries. Get the best of both worlds: persistence; consistency; rich SQL queries; high availability; scalability; and simple, flexible APIs and schemas for agile development. This session describes the memcached connectors and examines some use cases for how MySQL and memcached fit together in application architectures. It does the same for the newest MySQL Cluster native connector, an easy-to-use, fully asynchronous connector for Node.js.

    Sunday, 1.15 pm, Room Golden Gate 4: InnoDB Performance Tuning (Inaam Rana, Oracle)
    The InnoDB storage engine has always been highly efficient and includes many unique architectural elements to ensure high performance and scalability. In MySQL 5.5 and MySQL 5.6, InnoDB includes many new features that take better advantage of recent advances in operating systems and hardware platforms than previous releases did. This session describes unique InnoDB architectural elements for performance, new features, and how to tune InnoDB to achieve better performance.

    Sunday, 4.15 pm, Room Golden Gate 3: InnoDB Compression for OLTP (Nizameddin Ordulu, Facebook, and Inaam Rana, Oracle)
    Data compression is an important capability of the InnoDB storage engine for Oracle's MySQL. Compressed tables reduce the size of the database on disk, resulting in fewer reads and writes and better throughput by reducing the I/O workload. Facebook pushes the limit of InnoDB compression and has made several enhancements to InnoDB, making this technology ready for online transaction processing (OLTP). In this session, you will learn the fundamentals of InnoDB compression. You will also learn the enhancements the Facebook team has made to improve InnoDB compression, such as reducing compression failures, not logging compressed page images, and allowing changes of compression level.

    Not registered yet? You can still save US$300 over the on-site fee. Register Now!

    Read the article

  • Oracle Linux Partner Pavilion Spotlight

    - by Ted Davis
    With the first day of Oracle OpenWorld starting in less than a week, we wanted to showcase some of our premier partners exhibiting in the Oracle Linux Partner Pavilion (Booth #1033) this year. We have Independent Hardware Vendors, Independent Software Vendors, and Systems Integrators that show the breadth of support in the Oracle Linux and Oracle VM ecosystem. We'll be highlighting partners all week, so feel free to come back and check us out.

    Centrify delivers integrated software and cloud-based solutions that centrally control, secure, and audit access to cross-platform systems, mobile devices, and applications by leveraging the infrastructure organizations already own. From the data center and into the cloud, more than 4,500 organizations, including 40 percent of the Fortune 50 and more than 60 federal agencies, rely on Centrify's identity consolidation and privilege management solutions to reduce IT expenses, strengthen security, and meet compliance requirements. Visit Centrify at Oracle OpenWorld 2012 for a look at Centrify Suite and see how you can streamline security management on Oracle Linux:

    - Unify identities across the enterprise and remove the pain and security issues associated with managing local user accounts by leveraging Active Directory
    - Implement a least-privilege security model with flexible, role-based controls that protect privileged operations while still granting users the privileges they need to perform their job
    - Get a central, global view of audited user sessions across your Oracle Linux environment

    "Data Intensity's cloud infrastructure leverages Oracle VM and Oracle Linux to provide highly available enterprise application management solutions. Engineers will be available to answer questions about and demonstrate the technology, including management tools, configuration do's and don'ts, high availability, live migration, integrating the technology with Oracle software, and how the integrated support process works."

    Mellanox's end-to-end InfiniBand and Ethernet server and storage interconnect solutions deliver the highest performance, efficiency, and scalability for enterprise, high-performance cloud, and Web 2.0 applications. Mellanox's interconnect solutions accelerate Oracle RAC query throughput to reach 50Gb/s, compared to competing TCP/IP-based solutions that cap off at less than 12Gb/s. Mellanox solutions help Oracle's Exadata deliver a 10X performance boost at 50% of the hardware cost, making it the world's leading database appliance.

    Thanks for reviewing today's partner spotlight. We will highlight new partners each day this week leading up to Oracle OpenWorld.

    Read the article

  • 7 reasons you had to be at JavaOne Latin America 2012

    - by Bruno.Borges
    Yesterday was 12/12/12, and everybody went crazy on Twitter with cool memes like this one. And maybe you are now wondering why I mentioned 7 (seven) in the blog title. Because I want to play with numbers? Yes! Today is 7 days after JavaOne Latin America 2012 ended (...and I had to figure out an excuse for taking so long to blog about it...). So unless you were at JavaOne Latin America this year, here are 7 things you missed:

    1. OTN Lounge mini-theatre
    There was a mini-theatre holding several lightning talks. We had people from the SouJava JUG, GoJava JUG, Globalcode, and several other Java gurus and companies running demos, talks, and even more. For example, @drspockbr talked about the ScrumToys project, which demonstrates the power of JSF.

    2. Hands-on Lab for JAX-RS and WebSockets
    One of the cool things to do during JavaOne is to come to these hands-on labs and really do something using new technologies with the help of experts. This one in particular was covered by me, Arun Gupta, and Reza Rahman. The HOL had more people than laptops (and we had 48 laptops!) interested in understanding and learning about the new stuff coming with Java EE 7: things like JAX-RS, Server-Sent Events, and WebSockets. Hey, if you want to try this HOL by yourself, it is available on GitHub, so go for it! If you have questions, just let me know!

    3. Java Community Keynote
    This keynote presented a lot of cool things, like startups using Java in their projects, the Duke Awards, SouJava winning the JCP Outstanding Award, the Java Band, and even more! It was really a space where the Java community could present what they are doing and what they want to do. There's a lot of interest in the Adopt-a-JSR program and Adopt-OpenJDK. There's also an Adopt-a-JavaEE-JSR program! Take a look if you want to participate and Make the Future Java.

    4. Java EE (JMS, JAX-RS) sessions from Reza Rahman, the heavy metal guy
    Reza is a well-known professional and Java EE enthusiast from the community who just joined Oracle this year. His sessions were very well attended, perhaps because of the high interest in the new things coming to Java EE 7, like JMS 2.0 and JAX-RS 2.0. If you want to look at what he did at this JavaOne edition, read his blog post. By the way, if you like Java and heavy metal, you should follow him on Twitter as well! :-)

    5. Java EE (WebSockets, HTML5) sessions from Arun Gupta, the GlassFish guy
    If you don't know Arun Gupta, no worries. You will have time to get to know him while you read his Java EE 6 Pocket Guide. Arun has been evangelizing Java EE for a long time and is now spreading the word about the upcoming Java EE 7. He gave one talk about HTML5 productivity on the Java EE 7 platform, and another on building web apps with WebSockets. Pretty neat! Arun blogged about JavaOne Latin America as well. Read it here.

    6. Java Embedded and JavaFX
    If there are two things that are really trending in the Java world right now besides Java EE 7, they are certainly JavaFX and Java Embedded. There were 14 talks covering Java Embedded, from Java Card to Raspberry Pi, from Java ME to Java on your TV with Ginga-J. The Internet of Things is becoming real, and Java is the only platform today that can connect it all in a standardized and concise way. JavaFX gained a lot of attention too: there were 8 sessions covering what the platform has to offer in terms of rich user experience. The JavaFX Scene Builder is an awesome tool to start designing a UI, and coding for JavaFX is like coding Swing with 8 hands, one of them holding your coffee cup. You can achieve a lot with your two hands (unless you really have 8 hands, in which case you can achieve 4 times more :-). If you want to read more about JavaFX, go to Stephen Chin's blog post.

    7. GlassFish and Friends Party, 1st edition at JavaOne Latin America
    This is probably the thing I'm most proud of. We brought to Brazil the tradition of holding a happy hour for all GlassFish and Java EE friends. This party started almost 7 years ago in San Francisco, and it was about time to bring it to Brazil! The party happened on Tuesday night, right after the JavaOne general keynote, at the Tribeca Pub. We had about 80 attendees and met a lot of Java EE developers there! People from JUGs, Oracle, Locaweb, and Red Hat showed up too, including some execs from Oracle who didn't resist and could not miss a party like this one. Lots of caipirinhas, beer, and food for everyone, some cool music... even The Fish walking around the party with Juggy! You can see more photos from the party in an album I shared with the recently created GlassFish Brasil community on Google+ here (but you may be more interested in joining the GlassFish English community). There are also more pictures that Arun took and shared at this link.

    So now you may want to consider coming to Brazil next year! Java EE 7 is on its way, and Brazil is happily and patiently waiting for it, with a lot of enthusiasm. By the way, GlassFish and Java EE 6 just celebrated a happy birthday!

    Read the article
