Search Results

  • Failed to load URL in DOMDocument?

    - by Lakhan
    I'm trying to get the latitude and longitude for an address from Google Maps, but the request fails. When I copy the same URL directly into the browser's address bar, it returns the correct result. If anybody knows the answer, please reply. Thanks in advance. Here is my code. <?php $key = '[AIzaSyAoPgQlfKsBKQcBGB01cl8KmiPee3SmpU0]'; $opt = array ( 'address' => urlencode('Kolkata,India, ON') , 'output' => 'xml' ); $url = 'http://maps.google.com/maps/geo?q='.$opt['address'].'&output='.$opt['output'].'&oe=utf8&key='.$key; $dom = new DOMDocument(); $dom->load($url); $xpath = new DomXPath($dom); $xpath->registerNamespace('ge', 'http://earth.google.com/kml/2.0'); $statusCode = $xpath->query('//ge:Status/ge:code'); if ($statusCode->item(0)->nodeValue == '200') { $pointStr = $xpath->query('//ge:coordinates'); $point = explode(",", $pointStr->item(0)->nodeValue); $lat = $point[1]; $lon = $point[0]; echo '<pre>'; echo 'Lat: '.$lat.', Lon: '.$lon; echo '</pre>'; } ?> The following error occurs: Warning: DOMDocument::load(http://maps.google.com/maps/geo? q=Kolkata%252CIndia%252C%2BON&output=xml&oe=utf8&key=%5BAIzaSyAoPgQlfKsBKQcBGB01cl8KmiPee3SmpU0%5D) [domdocument.load]: failed to open stream: HTTP request failed! HTTP/1.0 400 Bad Request in C:\xampp\htdocs\draw\draw.php on line 19 Warning: DOMDocument::load() [domdocument.load]: I/O warning : failed to load external entity "http://maps.google.com/maps/geo?q=Kolkata%252CIndia%252C%2BON&output=xml&oe=utf8&key=%5BAIzaSyAoPgQlfKsBKQcBGB01cl8KmiPee3SmpU0%5D" in C:\xampp\htdocs\draw\draw.php on line 19 Notice: Trying to get property of non-object in C:\xampp\htdocs\draw\draw.php on line 26
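    The 400 Bad Request points at the query string: the address is urlencoded once when the array is built and appears to be encoded again when the URL is fetched (note the %252C in the warning), and the API key is wrapped in literal square brackets. A hedged sketch of the same request with those two things removed; the key shown is just the question's placeholder:

        <?php
        $key = 'AIzaSyAoPgQlfKsBKQcBGB01cl8KmiPee3SmpU0'; // no surrounding [ ] brackets
        $address = 'Kolkata, India';

        // Encode the address exactly once, when the URL is built
        $url = 'http://maps.google.com/maps/geo?q=' . urlencode($address)
             . '&output=xml&oe=utf8&key=' . $key;

        $dom = new DOMDocument();
        if ($dom->load($url)) {
            $xpath = new DOMXPath($dom);
            $xpath->registerNamespace('ge', 'http://earth.google.com/kml/2.0');
            $coords = $xpath->query('//ge:coordinates');
            if ($coords->length > 0) {
                // KML coordinates come back as "lon,lat,alt"
                list($lon, $lat) = explode(',', $coords->item(0)->nodeValue);
                echo 'Lat: ' . $lat . ', Lon: ' . $lon;
            }
        }
        ?>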

  • Sample twitter App

    - by Jack
    I am now running my code on the web hosting service http://xtreemhost.com/. <?php function updateTwitter($status) { $username = 'xxxxxx'; $password = 'xxxx'; $url = 'http://twitter.com/statuses/update.xml'; $postargs = 'status='.urlencode($status); $responseInfo=array(); $ch = curl_init($url); curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 2); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt ($ch, CURLOPT_POST, true); // Give CURL the arguments in the POST curl_setopt ($ch, CURLOPT_POSTFIELDS, $postargs); // Set the username and password in the CURL call curl_setopt($ch, CURLOPT_USERPWD, $username.':'.$password); // Set some cURL flags (not too important) $response = curl_exec($ch); if($response === false) { echo 'Curl error: ' . curl_error($ch); } else { echo 'Operation completed without any errors<br/>'; } // Get information about the response $responseInfo=curl_getinfo($ch); // Close the CURL connection curl_close($ch); // Make sure we received a response from Twitter if(intval($responseInfo['http_code'])==200){ // Display the response from Twitter echo $response; }else{ // Something went wrong echo "Error: " . $responseInfo['http_code']; } curl_close($ch); } updateTwitter("Just finished a sweet tutorial on http://brandontreb.com"); ?> Now I get the following error: Curl error: Couldn't resolve host 'api.twitter.com' Error: 0. Can somebody please help me solve this problem?
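    "Couldn't resolve host" usually means the server cannot perform DNS lookups or blocks outbound connections, which is common on free hosting plans, rather than anything in the cURL options themselves. A hedged sketch of a quick check to run on the same host before changing the code:

        <?php
        // If gethostbyname() returns the name unchanged, the host cannot resolve it,
        // and the problem is the hosting environment rather than the script.
        var_dump(gethostbyname('twitter.com'));
        var_dump(gethostbyname('api.twitter.com'));

        // Optional: confirm outbound HTTP is allowed at all.
        var_dump(@file_get_contents('http://twitter.com/'));
        ?>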

  • Can I create a Google calendar for a user in a hosted domain using the admin credentials

    - by user351013
    I use the admin credentials for all of my interactions with the Google API, and I can retrieve\create\update\delete events from and for all of my hosted domain users. However, when I go to create a calendar for a hosted domain user, the calendar is created in the admin's space. In the example below the GoogleUserName does NOT match the GoogleAccount. The postUri would look similar to: http://www.google.com/calendar/feeds/[email protected]/owncalendars/full and the GoogleUserName is admin@hosteddomain.com. The API creates a calendar, but it is in the admin's space. CalendarService service = new CalendarService("Test"); service.setUserCredentials(GoogleUserName, GooglePassword); CalendarEntry calendar = new CalendarEntry(); calendar.TimeZone = "America/Chicago"; calendar.Title.Text = Title; calendar.Summary.Text = Description; calendar.Color = Color; calendar.Selected = true; calendar.Hidden = false; Uri postUri = new Uri(String.Format("http://www.google.com/calendar/feeds/{0}/owncalendars/full", GoogleAccount)); CalendarEntry createdCalendar = (CalendarEntry)service.Insert(postUri, calendar); The documentation does specify to use the user's credentials; however, the documentation is not specific to hosted domains a great deal of the time, and as such I am always attempting trial and error when trying interactions. That I can use all of the CRUD on the user's events themselves using the admin credentials leads me to believe that it might be possible.
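    One hedged thing to try, using only the calls already shown above: authenticate the CalendarService as the hosted-domain user who should own the calendar, rather than as the admin, when posting to that user's owncalendars feed. Whether admin credentials can ever create calendars in another user's space is exactly what the docs leave open; targetUserPassword below is a placeholder.

        CalendarService service = new CalendarService("Test");
        // Authenticate as the calendar's intended owner, not the domain admin.
        service.setUserCredentials(GoogleAccount, targetUserPassword);

        Uri postUri = new Uri(String.Format(
            "http://www.google.com/calendar/feeds/{0}/owncalendars/full", GoogleAccount));
        CalendarEntry createdCalendar = (CalendarEntry)service.Insert(postUri, calendar);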

  • How to manipulate string to delete quotes?

    - by user1751581
    I am trying to manipulate a string so that any quotes (") within <a href> and </a> get taken out... Sorry if it's been asked before, but I just can't get it to work! By the way, I am POSTing the data from a form and then manipulating the string. It is basically HTML, but in the form of a string, and I want to take out quotes on things like images and links... Another thing is, I do not want to escape the quotes, because that would break the link... The whole point is that the HTML can be used and work fine... But now, something is automatically creating a second set of quotes inside the normal quotes, like this: <a href="\"http://www.example.com/\""></a> Example input would be: <p><a href="http://www.example.com">example</a></p> Here's how it appears when I echo it, however: <p><a href=\"http://www.example.com\">example</a></p> Here's how I want it to look: <p><a href="http://www.example.com">example</a></p> So I would actually be trying to get rid of the backslashes (\), my bad...
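    A minimal sketch of one likely fix, assuming the extra backslashes are being added automatically to the POSTed data (for example by magic_quotes_gpc or a similar auto-escaping layer); the field name $_POST['content'] is a placeholder:

        <?php
        $html = $_POST['content']; // hypothetical form field name

        // Strip the automatically added slashes before using the HTML.
        if (get_magic_quotes_gpc()) {
            $html = stripslashes($html);
        }

        echo $html; // <p><a href="http://www.example.com">example</a></p>
        ?>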

  • WPF: modifying the style of images in multiple Button templates

    - by user556415
    I'm trying to create a control panel for a video camera, the panel has buttons for up, down, left and right etc. Each camera function up, down, left and right is represented by three images (see code below) for the left side, middle and right side. The control panel is circular so the corner images kind of overlap (its complicated to explain this without a visual). When I click on up for example I have to hide the initial three images (leftside, middle and right side) and display another three images for left , middle and right that indicate that the button is pressed. I am achieving this by having a grid inside a button template. The problem I have is that for the corner images for the control there are really four images that represent this. For example for the top left corner the four images would be represent 1. Top not clicked. 2. Top Clicked and 3. Left Not clicked and 4. Left Clicked. My problem is if I need to make the images contained within the Top button have precedence when the top control is clicked or the images in the left button have precedence when the left button is clicked. So it's like I want to modify the left button's image visible property when the top button is clicked and vise versa. This is really difficult to explain so I apologize if it makes little sense but I can email the source code on request if anyone is interested in my predicament. <Grid> <Canvas> <!--<StackPanel>--> <Button Name="TopSide" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" Height="34" Width="102" Canvas.Left="97" Canvas.Top="60" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" > <Button.Template> <ControlTemplate TargetType="{x:Type Button}"> <Grid Width="100"> <Canvas> <Image Name="TopRightNormal" Source="Resources/topright_off.jpg" Height="34" Width="34" Canvas.Left="66"></Image> <Image Name="TopRightDown" Source="Resources/topright_down.jpg" Height="34" Width="34" Canvas.Left="66" Visibility="Hidden" ></Image> <Image Name="TopNormal" Source="Resources/topcenter_off.jpg" Height="34" Width="34" Canvas.Left="34" /> <Image Name="TopPressed" Source="Resources/topcenter_down.jpg" Height="34" Width="34" Canvas.Left="34" Visibility="Hidden" /> <Image Name="TopDisabled" Source="Resources/topcenter_off.jpg" Height="34" Width="34" Canvas.Left="34" Visibility="Hidden" /> <Image Name="TopLeftNormal" Source="Resources/topleft_off.jpg" Height="34" Width="34" Canvas.Left="2" ></Image> <Image Name="TopLeftDown" Opacity="0" Source="Resources/topleft_down.jpg" Height="34" Width="34" Canvas.Left="2" Visibility="Hidden" ></Image> </Canvas> </Grid> <ControlTemplate.Triggers> <Trigger Property="IsPressed" Value="True"> <Setter TargetName="TopNormal" Property="Visibility" Value="Hidden" /> <Setter TargetName="TopPressed" Property="Visibility" Value="Visible" /> <Setter TargetName="TopRightNormal" Property="Visibility" Value="Hidden" /> <Setter TargetName="TopRightDown" Property="Visibility" Value="Visible" /> <Setter TargetName="TopLeftNormal" Property="Visibility" Value="Hidden" /> <Setter TargetName="TopLeftDown" Property="Visibility" Value="Visible" /> <Setter TargetName="TopLeftDown" Property="Opacity" Value="100" /> </Trigger> <Trigger Property="IsEnabled" Value="False"> <Setter TargetName="TopNormal" Property="Visibility" Value="Hidden" /> <Setter TargetName="TopDisabled" Property="Visibility" Value="Visible" /> </Trigger> </ControlTemplate.Triggers> </ControlTemplate> </Button.Template> </Button> <!--</StackPanel>--> <!--<StackPanel>--> <Button Name="LeftSide" 
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" Canvas.Left="100" Canvas.Top="60" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" MouseDown="Button_MouseDown_1"> <Button.Template> <ControlTemplate TargetType="{x:Type Button}"> <Grid Width="34" Height="100"> <Canvas> <Image Name="TopLeftNormal" Source="Resources/topleft_off.jpg" Height="34" Width="34" Canvas.Left="0"></Image> <Image Name="TopLeftDown" Opacity="0" Source="Resources/topleft_leftdown.jpg" Height="34" Width="34" Canvas.Left="0" Visibility="Hidden" ></Image> <Image Name="Normal" Source="Resources/leftcenter_off.jpg" Height="34" Width="34" Canvas.Top="32" Canvas.Left="0"/> <Image Name="Pressed" Source="Resources/leftcenter_down.jpg" Visibility="Hidden" Canvas.Top="32" Height="34" Width="34" /> <Image Name="Disabled" Source="Resources/leftcenter_off.jpg" Visibility="Hidden" Height="34" Width="34" Canvas.Top="32" /> </Canvas> </Grid> <ControlTemplate.Triggers> <Trigger Property="IsPressed" Value="True"> <Setter TargetName="Normal" Property="Visibility" Value="Hidden" /> <Setter TargetName="Pressed" Property="Visibility" Value="Visible" /> <Setter TargetName="TopLeftNormal" Property="Visibility" Value="Hidden" /> <Setter TargetName="TopLeftNormal" Property="Opacity" Value="0" /> <Setter TargetName="TopLeftDown" Property="Visibility" Value="Visible" /> <Setter TargetName="TopLeftDown" Property="Opacity" Value="100" /> </Trigger> <Trigger Property="IsEnabled" Value="False"> <Setter TargetName="Normal" Property="Visibility" Value="Hidden" /> <Setter TargetName="Disabled" Property="Visibility" Value="Visible" /> </Trigger> </ControlTemplate.Triggers> </ControlTemplate> </Button.Template> </Button> <!--</StackPanel>--> <!--<StackPanel>--> <Button xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" Height="34" Width="34" Canvas.Left="165" Canvas.Top="92" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" MouseDown="Button_MouseDown_2" > <Button.Template> <ControlTemplate TargetType="{x:Type Button}"> <Grid> <Image Name="Normal" Source="Resources/rightcenter_off.jpg" /> <Image Name="Pressed" Source="Resources/rightcenter_down.jpg" Visibility="Hidden" /> <Image Name="Disabled" Source="Resources/rightcenter_off.jpg" Visibility="Hidden" /> </Grid> <ControlTemplate.Triggers> <Trigger Property="IsPressed" Value="True"> <Setter TargetName="Normal" Property="Visibility" Value="Hidden" /> <Setter TargetName="Pressed" Property="Visibility" Value="Visible" /> </Trigger> <Trigger Property="IsEnabled" Value="False"> <Setter TargetName="Normal" Property="Visibility" Value="Hidden" /> <Setter TargetName="Disabled" Property="Visibility" Value="Visible" /> </Trigger> </ControlTemplate.Triggers> </ControlTemplate> </Button.Template> </Button> <!--</StackPanel>--> <!--<StackPanel>--> <Button xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" Height="34" Width="34" Canvas.Left="133" Canvas.Top="124" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" > <Button.Template> <ControlTemplate TargetType="{x:Type Button}"> <Grid> <Image Name="BottomNormal" Source="Resources/bottomcenter_off.jpg" /> <Image Name="BottomPressed" Source="Resources/bottomcenter_down.jpg" Visibility="Hidden" /> <Image Name="BottomDisabled" Source="Resources/bottomcenter_off.jpg" Visibility="Hidden" /> </Grid> <ControlTemplate.Triggers> <Trigger Property="IsPressed" Value="True"> <Setter TargetName="BottomNormal" Property="Visibility" Value="Hidden" /> <Setter TargetName="BottomPressed" Property="Visibility" Value="Visible" /> 
</Trigger> <Trigger Property="IsEnabled" Value="False"> <Setter TargetName="BottomNormal" Property="Visibility" Value="Hidden" /> <Setter TargetName="BottomDisabled" Property="Visibility" Value="Visible" /> </Trigger> </ControlTemplate.Triggers> </ControlTemplate> </Button.Template> </Button> <!--</StackPanel>--> <Image Source="Resources/bottomright_off.jpg" Height="34" Width="34" Canvas.Left="165" Canvas.Top="124"></Image> <Image Source="Resources/bottomleft_off.jpg" Height="34" Width="34" Canvas.Left="100" Canvas.Top="124"></Image> <!--<ToggleButton Style="{StaticResource MyToggleButtonStyle}" Height="34" Width="34" Margin="150,100"/>--> </Canvas> </Grid>
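    One hedged way to handle the shared corners is to pull the corner images out of the two button templates and place them directly on the Canvas (as the bottom corner images already are), then switch their Source with DataTriggers bound by ElementName to each button's IsPressed. A rough sketch for the top-left corner only; the canvas position is approximate and the image paths are the ones used above:

        <!-- Shared top-left corner image, a sibling of the TopSide and LeftSide buttons -->
        <Image Canvas.Left="99" Canvas.Top="60" Height="34" Width="34" IsHitTestVisible="False">
            <Image.Style>
                <Style TargetType="Image">
                    <!-- Default, unpressed artwork -->
                    <Setter Property="Source" Value="Resources/topleft_off.jpg"/>
                    <Style.Triggers>
                        <DataTrigger Binding="{Binding ElementName=TopSide, Path=IsPressed}" Value="True">
                            <Setter Property="Source" Value="Resources/topleft_down.jpg"/>
                        </DataTrigger>
                        <DataTrigger Binding="{Binding ElementName=LeftSide, Path=IsPressed}" Value="True">
                            <Setter Property="Source" Value="Resources/topleft_leftdown.jpg"/>
                        </DataTrigger>
                    </Style.Triggers>
                </Style>
            </Image.Style>
        </Image>

    Because the image lives in the same name scope as both buttons, the ElementName bindings resolve, and IsHitTestVisible="False" keeps it from swallowing clicks meant for the buttons underneath.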

  • Simulating a 2-level If-Else using RewriteCond

    - by hlissner
    Hi! I'm trying to get my head around RewriteCond, and want to rewrite any requests either to a static html page (if it exists), or to a specific index.php (so long as the requested file doesn't exist). To illustrate the logic: if HTTP_HOST is '(www\.)?mydomain.com' if file exists: "/default/static/{REQUEST_URI}.html", then rewrite .* to /default/static/{REQUEST_URI}.html else if file exists: {REQUEST_FILENAME}, then do not rewrite else rewrite .* to /default/index.php I don't seem to have much trouble doing it when I don't need to test for the HTTP_HOST. Ultimately, this one .htaccess file will be handling requests for several domains. I know I could get around this with vhosts, but I'd like to figure out how to do it this way. Here's where I am at now: RewriteCond %{HTTP_HOST} ^(www\.)?mydomain\.com$ [NC] RewriteCond /default/static/%{REQUEST_URI}.html -f RewriteRule . /default/static/%{REQUEST_URI}.html [L,NC] RewriteCond %{REQUEST_FILENAME} !-f RewriteRule . /default/index.php [L,QSA] I'm not too familiar with some of the other flags, will any of them be of use here (like chain|C, next|N or skip|S)? Thanks in advance! UPDATE: I've managed to do it, but would appreciate alternatives: RewriteEngine On RewriteRule ^(.+)/$ /$1 [L] RewriteCond %{HTTP_HOST} ^(domainA|domainB)\.com [NC] RewriteCond %{DOCUMENT_ROOT}/%1/static/%{REQUEST_URI}.html -f RewriteRule (.*)? /%1/static/$1.html [NC,L] RewriteCond %{HTTP_HOST} ^(domainA|domainB)\.com [NC] RewriteCond %{REQUEST_FILENAME} !-f RewriteRule .* /%1/index.php [L,QSA]

  • Using placeholders/variables in a sed command

    - by jesse_galley
    I want to store a specific part of a matched result as a variable to be used for replacement later, and I would like to keep this in a one-liner instead of finding the variable I need beforehand. When configuring Apache with mod_rewrite, you can specify parts of a pattern to be used as variables, like this: RewriteRule ^www.example.com/page/(.*)$ http://www.example.com/page.php?page=$1 [R=301,L] The part of the pattern match contained inside the parentheses is stored as $1 for use later. So if the URL was www.example.com/page/home, it would be replaced with www.example.com/page.php?page=home; the "home" part of the match was saved in $1 because it was the part of the pattern inside the parentheses. I want similar functionality in a sed command. I need to automatically replace many strings in a SQL dump file, adding DROP TABLE IF EXISTS commands before each CREATE TABLE, but I need to know the table name to do this. So if the dump file contains something like: ... CREATE TABLE `orders` ... I need to run something like: cat dump.sql | sed "s/CREATE TABLE `(.*)`/DROP TABLE IF EXISTS $1\N CREATE TABLE `$1`/g" to get the result of: ... DROP TABLE IF EXISTS `orders` CREATE TABLE `orders` ... I'm using the mod_rewrite syntax in the sed command as a logical example of what I'm trying to do. Any suggestions?
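    For reference, a hedged sketch of the same idea in sed's own syntax: in basic regular expressions the capture group is written \( \) and recalled with \1 rather than $1, and the backticks are best kept inside single quotes so the shell leaves them alone (the \n in the replacement works in GNU sed; other seds may need a literal escaped newline instead):

        sed 's/CREATE TABLE `\(.*\)`/DROP TABLE IF EXISTS `\1`;\nCREATE TABLE `\1`/' dump.sql > dump_with_drops.sql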

  • MySQL Casting in C#

    - by user798080
    Okay, so I'm attempting to print out the contents of a table in a comma-separated file. using (OdbcCommand com = new OdbcCommand("SELECT * FROM pie_data WHERE Pie_ID = ?", con)) { com.Parameters.AddWithValue("", Request.Form["pie_id"]); com.ExecuteNonQuery(); using (OdbcDataReader reader = com.ExecuteReader()) { string finalstring = ""; while (reader.Read()) { finalstring = reader.GetString(9) + ","; for (int i = 0; i <= 8; i = i + 1) { finalstring = finalstring + reader.GetString(i) + ","; } } } Response.Write(finalstring); noredirect = 1; } My table layout is: CREATE TABLE `rent_data` ( `Pies` INT(10) UNSIGNED NOT NULL, `Name` VARCHAR(85) NOT NULL, `Email` VARCHAR(85) NOT NULL, `Pie_Rent` DATE NOT NULL, `Rent_To` DATE NOT NULL, `Returned_Date` DATE NULL DEFAULT NULL, `Place` VARCHAR(100) NOT NULL, `Purpose` MEDIUMTEXT NOT NULL, `Comments` MEDIUMTEXT NULL, `Pie_ID` SMALLINT(5) UNSIGNED ZEROFILL NOT NULL, INDEX `Pie_ID` (`Equipment_ID`) ) The error I'm getting is this: Exception Details: System.InvalidCastException: Unable to cast object of type 'System.Int64' to type 'System.String'. On the line: finalstring = finalstring + reader.GetString(i) + ",";
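    The exception comes from calling GetString on a column that is not a string (the INT and SMALLINT columns here); ODBC data readers do not convert types for you. A hedged sketch of the loop rewritten with GetValue and Convert.ToString, which works for every column type in the table (the column order is taken from the question):

        string finalstring = "";
        while (reader.Read())
        {
            // Pie_ID (column 9) first, then the remaining columns in order
            finalstring = Convert.ToString(reader.GetValue(9)) + ",";
            for (int i = 0; i <= 8; i++)
            {
                // GetValue returns object, so INT, DATE and VARCHAR all convert cleanly;
                // IsDBNull guards nullable columns such as Returned_Date and Comments.
                finalstring += reader.IsDBNull(i) ? "" : Convert.ToString(reader.GetValue(i));
                finalstring += ",";
            }
        }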

  • Mechanize form submit not going to the correct response page, with no errors as to why. Is it something I'm doing?

    - by Zack Shapiro
    I threw this all in one controller for testing purposes. My code fills out the form correctly for adding a new address to your Amazon account. There are two buttons that submit this form, one takes you to add a new address which is what I don't want, and the other is just a Save & Continue input/image. When I submit the form via that button, as I do below, the form is still on the page, filled out as I have with my code. Inspecting the page titles, they're the same. There are no discernible errors that Mechanize or Amazon spit back. Any ideas? class AmazonCrawler def initialize @agent = Mechanize.new do |agent| agent.user_agent_alias = 'Mac Safari' agent.follow_meta_refresh = true agent.redirect_ok = true end end def login # login_url = "https://www.amazon.com/ap/signin?_encoding=UTF8&openid.assoc_handle=usflex&openid.claimed_id=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select&openid.identity=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select&openid.mode=checkid_setup&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.ns.pape=http%3A%2F%2Fspecs.openid.net%2Fextensions%2Fpape%2F1.0&openid.pape.max_auth_age=0&openid.return_to=https%3A%2F%2Fwww.amazon.com%2Fgp%2Fcss%2Fhomepage.html%3Fie%3DUTF8%26ref_%3Dgno_yam_ya" login_url = "https://www.amazon.com/gp/css/account/address/view.html?ie=UTF8&ref_=ya_add_address&viewID=newAddress" @agent.get(login_url) form = @agent.page.forms.first form.email = "whatever@gmail.com" form.radiobuttons.first.value = "0" form.radiobuttons.last.check form.password = "my_password" dashboard = @agent.submit(form) end end class UsersController < ApplicationController def index response = AmazonCrawler.new.login form = response.forms[1] # fill out form form.enterAddressFullName == "Your Name" form.enterAddressAddressLine1 = "123 Main Street" form.enterAddressAddressLine2 = "Apartment 34" form.enterAddressCity = "San Francisco" form.enterAddressStateOrRegion = "CA" form.enterAddressPostalCode = "94101" form.enterAddressPhoneNumber = "415-555-1212" form.AddressType = "RES" new_response = form.submit( form.button_with(value: /Save.*Continue/) ) end end
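    One detail that stands out in the controller above is form.enterAddressFullName == "Your Name": the double equals compares instead of assigns, so the name field is never filled in and Amazon may simply re-render the form. A hedged debugging sketch, using only Mechanize calls and the field names already shown:

        form = response.forms[1]

        form.enterAddressFullName = "Your Name"   # single '=' actually sets the field

        # List the submit buttons Mechanize can see, to confirm the
        # Save & Continue image button really belongs to this form.
        form.buttons.each { |b| puts "#{b.class}: #{b.name.inspect} => #{b.value.inspect}" }

        new_response = form.submit(form.button_with(value: /Save.*Continue/))
        puts new_response.title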

  • Querying descendants in LINQ to XML

    - by Gordon
    I have the following xml data: <portfolio> <item> <title>Site</title> <description>Site.com is a </description> <url>http://www.site.com</url> <photos> <photo url="http://www.site.com/site/thumbnail.png" thumbnail="true" description="Main" /> <photo url="http://www.site.com/site/1.png" thumbnail="false" description="Main" /> </photos> </item> </portfolio> In C# I am using the following LINQ query: List<PortfolioItem> list = new List<PortfolioItem>(); XDocument xmlDoc = XDocument.Load(HttpContext.Current.Server.MapPath("~/app_data/portfolio.xml")); list = (from portfolio in xmlDoc.Descendants("item") select new PortfolioItem() { Title = portfolio.Element("title").Value, Description = portfolio.Element("description").Value, Url = portfolio.Element("url").Value }).ToList(); How do I go about querying the photos node? In the PortfolioItem class I have a property: List Photos {get;set;} Any ideas would be greatly appreciated!
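    A minimal sketch of one way to project the nested photo elements as part of the same query, assuming a Photo class with Url, Thumbnail and Description properties (those names are illustrative; adjust them to whatever the Photos list actually holds):

        list = (from portfolio in xmlDoc.Descendants("item")
                select new PortfolioItem()
                {
                    Title = portfolio.Element("title").Value,
                    Description = portfolio.Element("description").Value,
                    Url = portfolio.Element("url").Value,
                    // Drill into <photos> and project each <photo>'s attributes
                    Photos = (from photo in portfolio.Element("photos").Elements("photo")
                              select new Photo()
                              {
                                  Url = (string)photo.Attribute("url"),
                                  Thumbnail = (bool)photo.Attribute("thumbnail"),
                                  Description = (string)photo.Attribute("description")
                              }).ToList()
                }).ToList();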

  • Need Help on JavaScript to Trim

    - by MacpaLtd
    I am facing some problems with my server space. The images are using all the space from the server, making it slow. As it is an eCommerce website, it cannot be slow or we lose customers. If I have the following: SKU's : ABC123-001 > catName > Phone ABC753-851 > catName > MAC AT1233-098 > catName > PC How can I use trim to make it the following: SKU's : 123 > catName > Phone 753 > catName > MAC 1233 > catName > PC Which I would use in the following script: <script type="text/javascript"> $(function(){ var sku = $("#ProductBreadcrumb ul li:last").text(); $(".ProductThumbImage img").attr('src','http://img.example.com/images/'+catName+'/'+sku+'.jpg'); }); </script> So, basically, the output for the picture link would be: http://img.example.com/images/phone/123.jpg http://img.example.com/images/mac/753.jpg http://img.example.com/images/pc/1233.jpg So yeah, first problem I have to face is.. How can I trim it? I am not familiar with JavaScript so any help would be really appreciated :D
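    A minimal sketch of one way to do the trimming with regular expressions, assuming the SKU always starts with letters to be dropped and ends with an optional "-suffix" that should also be dropped (the selectors and the catName variable are taken as-is from the question and assumed to exist):

        <script type="text/javascript">
        $(function(){
            var sku = $("#ProductBreadcrumb ul li:last").text();
            // "ABC123-001" -> "123": strip leading letters, then anything from the first dash on
            var trimmed = sku.replace(/^[A-Za-z]+/, '').replace(/-.*$/, '');
            $(".ProductThumbImage img").attr('src',
                'http://img.example.com/images/' + catName + '/' + trimmed + '.jpg');
        });
        </script>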

  • ESPN API - How can I retrieve college basketball conferences using the Teams API?

    - by nomad
    The support forums on ESPN.com recommend using Stack Overflow with the ESPN tag. That's why I'm here. I'm trying to obtain a list of all NCAA college basketball teams using ESPN's Teams API. I started with this GET request: http://api.espn.com/v1/sports/basketball/mens-college-basketball/teams?apikey=MY_API_KEY That gave me a list of teams, but many of them are missing. For example, there is no Nebraska. So then I thought that maybe I need to get a list of teams by conference. So I read this in the documentation: GROUPS: Allows for filtering by "group" or division, e.g. AL East, NFC South, etc. For group IDs and their corresponding values, make a request to http://developer.espn.com/v1/{resource}/leagues. Not applicable to golf and tennis. So then I try to make a request to `http://developer.espn.com/v1/basketball/mens-college-basketball/leagues?apikey=MY_API_KEY' and it says the page does not exist. Is this a bug or user error?

  • How can I pull multiple rows from a MySQL table and use all of them automatically for the same thing

    - by Rob
    Basically, I have multiple URL's stored in a MySQL table. I want to pull those URLs from the table and have cURL connect to all of them. Currently I've been storing the URL's in the local script, but I've added a new page that I can add and remove them from the database, and I'd like the page to reflect it appropriately. Here is what I currently have: $sites[0]['url'] = "http://example0.com "; $sites[1]['url'] = "http://example1.com"; $sites[2]['url'] = "http://example2.com"; $sites[3]['url'] = "http://example3.com"; foreach($sites as $s) { // Now for some cURL to run it. $ch = curl_init($s['url']); //load the urls and send GET data curl_setopt($ch, CURLOPT_TIMEOUT, 2); //No need to wait for it to load. Execute it and go. curl_exec($ch); //Execute curl_close($ch); //Close it off } Now I assume it can't be too amazingly difficult to do, I just don't know how. So if you could point me in the right direction, I'd be grateful. But if you supply me with some code, please comment it appropriately so that I can understand what each line is doing.
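    A hedged sketch of how the loop might read once the URLs come from the database instead of the hard-coded array, assuming a table named sites with a url column and an already-open mysqli connection in $db (all of those names are placeholders):

        <?php
        // Fetch every stored URL; each row replaces one of the old $sites[n]['url'] entries.
        $result = $db->query("SELECT url FROM sites");

        while ($row = $result->fetch_assoc()) {
            // Same "fire and forget" cURL call as before, once per stored URL.
            $ch = curl_init($row['url']);           // load the URL and send GET data
            curl_setopt($ch, CURLOPT_TIMEOUT, 2);   // no need to wait for it to load
            curl_exec($ch);                         // execute
            curl_close($ch);                        // close it off
        }
        ?>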

  • Comparing lists of field-hashes with equivalent AR-objects.

    - by Tim Snowhite
    I have a list of hashes, as such: incoming_links = [ {:title => 'blah1', :url => "http://blah.com/post/1"}, {:title => 'blah2', :url => "http://blah.com/post/2"}, {:title => 'blah3', :url => "http://blah.com/post/3"}] And an ActiveRecord model which has fields in the database with some matching rows, say: Link.all => [<Link#2 @title='blah2' @url='...post/2'>, <Link#3 @title='blah3' @url='...post/3'>, <Link#4 @title='blah4' @url='...post/4'>] I'd like to do set operations on Link.all with incoming_links so that I can figure out that <Link#4 ...> is not in the set of incoming_links, and {:title => 'blah1', :url =>'http://blah.com/post/1'} is not in the Link.all set, like so: #pseudocode #incoming_links = as above links = Link.all expired_links = links - incoming_links missing_links = incoming_links - links expired_links.destroy missing_links.each{|link| Link.create(link)} One route I've tried: I'd rather not rewrite Array#- and such, and I'm okay with converting incoming_links to a set of unsaved Link objects; so I've tried overwriting hash eql? and so on in Link so that it ignored the id equality that AR::Base provides by default. But this is the only place this sort of equality should be considered in the application - in other places the Link#id default identity is required. Is there some way I could subclass Link and apply the hash, eql?, etc overwriting there? The other route I've tried is to pull out the attributes hash for each Link and doing a .slice('id',...etc) to prune the hashes down. But this requires writing seperate methods for keeping track of the Link objects while doing set operations on the hashes, or writing seperate Collection classes to wrap the incoming_links hash-list and Link-list which seems a bit overkill. What is the best way to design this interaction? Extra credit for cleanliness.
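    A hedged sketch of one way to keep Link's normal identity semantics and still do the set math: compare plain attribute hashes on both sides and keep a lookup from each hash back to its record (the choice of :title and :url as the comparison key comes straight from the question):

        incoming = incoming_links.map { |h| { :title => h[:title], :url => h[:url] } }

        links_by_attrs = {}
        Link.all.each do |link|
          links_by_attrs[{ :title => link.title, :url => link.url }] = link
        end
        existing = links_by_attrs.keys

        expired_links = (existing - incoming).map { |attrs| links_by_attrs[attrs] }
        missing_links = incoming - existing

        expired_links.each(&:destroy)
        missing_links.each { |attrs| Link.create(attrs) }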

  • Manually extracting portions of strings contained in a list (parsing)

    - by user1652011
    I'm aware that there are modules that fully simplify this task, but given that I am running a base install of Python (standard modules only), how would I extract the following: I have a list. This list is the contents, line by line, of a webpage. Here is a mock-up list (unformatted) for informative purposes: <script> link = "/scripts/playlists/1/" + a.id + "/0-5417069212.asx"; <script> "<a href="/apps/audio/?feedId=11065"><span class="px13">Eastern Metro Area Fire</span>" From the above strings, I need the following extracted: the feedId (11065), which is, incidentally, a.id in the code above; "/scripts/playlists/1/"; and "/0-5417069212.asx". Remembering that each of these lines is just the contents of objects in a list, how would I go about extracting that data? Here is the full list: contents = urllib2.urlopen("http://www.radioreference.com/apps/audio/?ctid=5586") Pseudo: from urllib2 import urlopen as getpage page_contents = getpage("http://www.radioreference.com/apps/audio/?ctid=5586") feedID = % in (page_contents.search() for "/apps/audio/?feedId=%") titleID = % in (page_contents.search() for "<span class="px13">%</span>") playlistID = % in (page_contents.search() for "link = "%" + a.id + "*.asx";") asxID = * in (page_contents.search() for "link = "*" + a.id + "%.asx";") streamURL = "http://www.radioreference.com/" + playlistID + feedID + asxID + ".asx" I plan to format it such that streamURL should equal: http://www.radioreference.com/scripts/playlists/1/11065/0-5417067072.asx
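    A minimal sketch using only the standard library (urllib2 and re), following the pseudocode above; the regular expressions are guesses based on the two sample lines and may need tightening for the real page:

        import re
        import urllib2

        page = urllib2.urlopen("http://www.radioreference.com/apps/audio/?ctid=5586").read()

        # feedId and the display title from the <a ...><span ...> line
        feed_id = re.search(r'/apps/audio/\?feedId=(\d+)', page).group(1)
        title = re.search(r'<span class="px13">([^<]+)</span>', page).group(1)

        # playlist path and the .asx suffix from the "link = ..." line
        playlist_id, asx_id = re.search(
            r'link\s*=\s*"([^"]+)"\s*\+\s*a\.id\s*\+\s*"([^"]+)\.asx"', page).groups()

        stream_url = "http://www.radioreference.com" + playlist_id + feed_id + asx_id + ".asx"
        print stream_url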

  • ASP.NET configuration inheritance

    - by NowYouHaveTwoProblems
    I have an ASP.NET application that defines a custom configuration section in web.config. Recently I had a customer who wanted to deploy two instances of the application (for testing in addition to an existing production application). The configuration chosen by the customer was: foo.com - production application foo.com/Testing - test application In this case, the ASP.NET configuration engine decided to apply the settings at foo.com/web.config to foo.com/Testing/web.config. Thankfully this caused a configuration error because the section was redefined at the second level rather than giving the false impression that the two web applications were isolated. What I would like to do is to specify that my configuration section is not inherited and must be re-defined for any web application that requires it but I haven't been able to find a way to do this. My web.config ends up something like this <configuration> <configSections> <section name="MyApp" type="MyApp.ConfigurationSection"/> </configSections> <MyApp setting="value" /> <NestedSettingCollection> <Item key="SomeKey" value="SomeValue" /> <Item key="SomeOtherKey" value="SomeOtherValue" /> </NestedSettingCollection> </MyApp> </configuration>
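    One hedged way to keep the parent's settings out of foo.com/Testing is to wrap the custom section in a location element with inheritInChildApplications="false" in the parent web.config; path="." scopes it to the parent application itself. A sketch based on the config above:

        <configuration>
          <configSections>
            <section name="MyApp" type="MyApp.ConfigurationSection"/>
          </configSections>

          <!-- Settings inside this location block are not inherited by child applications -->
          <location path="." inheritInChildApplications="false">
            <MyApp setting="value">
              <NestedSettingCollection>
                <Item key="SomeKey" value="SomeValue" />
                <Item key="SomeOtherKey" value="SomeOtherValue" />
              </NestedSettingCollection>
            </MyApp>
          </location>
        </configuration>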

  • AdMob in XML not showing in LinearLayout

    - by NoobMe
    I am implementing AdMob in my app. The ad appears when the parent is a RelativeLayout, but I must not use alignParentBottom, so I am changing the parent to a LinearLayout; once I do, the ad no longer shows. Any tips? Help? Thanks in advance. Here it is in XML: <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="match_parent" android:orientation="vertical" > <RelativeLayout android:id="@+id/banner_holder" android:layout_width="match_parent" android:layout_height="wrap_content" > <ImageView android:id="@+id/offline_banner" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_centerInParent="true" android:background="@color/black" android:src="@drawable/offline_banner" /> <com.google.ads.AdView xmlns:ads="http://schemas.android.com/apk/lib/com.google.ads" android:id="@+id/adView" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_centerInParent="true" ads:adSize="SMART_BANNER" ads:adUnitId="@string/unit_id" ads:loadAdOnCreate="true" /> </RelativeLayout> <FrameLayout android:id="@+id/fragmentContainer" android:layout_width="match_parent" android:layout_height="wrap_content" /> </LinearLayout> I want the AdMob banner to be at the bottom part of the screen without using alignParentBottom from RelativeLayout. Thanks~
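    A hedged sketch of one common fix: give the content FrameLayout a weight of 1 so it stretches, and keep the banner holder at wrap_content at the end of the vertical LinearLayout; only the layout attributes change, and the ImageView/AdView block stays exactly as in the question:

        <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:layout_width="match_parent"
            android:layout_height="match_parent"
            android:orientation="vertical" >

            <!-- Content takes all leftover height -->
            <FrameLayout
                android:id="@+id/fragmentContainer"
                android:layout_width="match_parent"
                android:layout_height="0dp"
                android:layout_weight="1" />

            <!-- Banner holder is measured after the weighted view, so it stays visible at the bottom -->
            <RelativeLayout
                android:id="@+id/banner_holder"
                android:layout_width="match_parent"
                android:layout_height="wrap_content" >
                <!-- ImageView and com.google.ads.AdView exactly as in the original layout -->
            </RelativeLayout>

        </LinearLayout>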

  • Change $mailTo variable based on select input value (array)

    - by Dirty Bird Design
    I have the following select list: <form action="mail.php" method="POST"> <select name="foo" id="foo"> <option value="sales">Sales</option> <option value="salesAssist">Sales Assist</option> <option value="billing">Billing</option> <option value="billingAssist">Billing Assist</option> </select> </form> I need to route the $mailTo variable depending on which option they select: Sales and Sales Assist go to sales@email.com, while Billing and Billing Assist go to billing@email.com. PHP pseudo code: <? php $_POST['foo'] if inArray(sales, salesAssist) foo="sales@email.com"; else if inArray(billing, billingAssist) foo="billing@email.com"; mailTo="foo" ?> I know there is nothing correct about the above, but you can see what I am trying to do: change a variable's value based on the selected value. I don't want to do this with JS; I would rather learn more PHP here. Thank you.
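    A minimal working sketch of that idea with in_array; the fallback address for unexpected values is an assumption:

        <?php
        $foo = isset($_POST['foo']) ? $_POST['foo'] : '';

        if (in_array($foo, array('sales', 'salesAssist'))) {
            $mailTo = 'sales@email.com';
        } elseif (in_array($foo, array('billing', 'billingAssist'))) {
            $mailTo = 'billing@email.com';
        } else {
            $mailTo = 'sales@email.com'; // assumed fallback for unexpected values
        }

        // $mailTo can now be used as the "to" address in mail()
        ?>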

  • Can ping IP address and nslookup hostname but cannot ping hostname

    - by jao
    On a windows 2003 server I can nslookup www.google.com which returns Server: localhost Address: 127.0.0.1 Non-authoritative answer: Name: www.l.google.com Addresses: 74.125.79.104, 74.125.79.147, 74.125.79.99 Aliases: www.google.com I can then ping 74.125.79.104: Pinging 74.125.79.104 with 32 bytes of data: Reply from 74.125.79.104: bytes=32 time=16ms TTL=54 Reply from 74.125.79.104: bytes=32 time=32ms TTL=54 Reply from 74.125.79.104: bytes=32 time=15ms TTL=54 Reply from 74.125.79.104: bytes=32 time=15ms TTL=54 Ping statistics for 74.125.79.104: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 15ms, Maximum = 32ms, Average = 19ms But I cannot ping www.google.com: Ping request could not find host www.google.com. Please check the name and try again. (this one is different from the other question in that this one has a TLD, it is not a local domain.) Update: I am running a dns server at localhost (127.0.0.1). Even when I change it to use for example opendns, it still can nslookup hostname and ping ip address, but not ping hostname. So what is wrong? Update 2: here is the ipconfig /all result: Windows IP Configuration Host Name . . . . . . . . . . . . : SERVER Primary Dns Suffix . . . . . . . : NETWORK.local Node Type . . . . . . . . . . . . : Unknown IP Routing Enabled. . . . . . . . : No WINS Proxy Enabled. . . . . . . . : No DNS Suffix Search List. . . . . . : NETWORK.local Ethernet adapter Local Area Connection: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Broadcom NetXtreme Gigabit Ethernet #2 Physical Address. . . . . . . . . : 00-0F-1F-56-3B-AA DHCP Enabled. . . . . . . . . . . : No IP Address. . . . . . . . . . . . : 192.168.7.2 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : 192.168.7.1 DNS Servers . . . . . . . . . . . : 127.0.0.1 Update 3: Thanks everyone for their help and suggestions. I appreciate that. Ipconfig /flushdns returns: Sucessfully flushed the DNS resolver cache Ipconfig /displaydns returns: 2.7.168.192.in-addr.arpa ---------------------------------------- Record Name . . . . . : 2.7.168.192.in-addr.arpa. Record Type . . . . . : 12 Time To Live . . . . : 0 Data Length . . . . . : 4 Section . . . . . . . : Answer PTR Record . . . . . : webserver.mydomainname.com 1.0.0.127.in-addr.arpa ---------------------------------------- Record Name . . . . . : 1.0.0.127.in-addr.arpa. Record Type . . . . . : 12 Time To Live . . . . : 0 Data Length . . . . . : 4 Section . . . . . . . : Answer PTR Record . . . . . : localhost Update 4: Wireshark shows the following: 3 11.540542 208.67.220.220 192.168.7.2 DNS Standard query response A 74.125.79.99 A 74.125.79.104 A 74.125.79.147 6 42.056794 192.168.7.2 192.168.7.255 NBNS Name query NB WWW.GOOGLE.COM<00> which is weird: when I ping, it sends a packet to 192.168.7.255 instead of asking the DNS server for an address

  • Find More Streaming TV Online with Clicker.tv

    - by DigitalGeekery
    Looking for a way to access more of your favorite TV Shows and other online entertainment? Today we’ll take a look at Clicker.tv which offers an awesome way to find tons of TV programs and movies. Clicker.tv Clicker.tv is an HTML5 web application that indexes both free and premium content from sources like Hulu, Netflix, Amazon, iTunes, and more. Some movies or episodes, such as those from Netflix and Amazon.com’s Video on Demand, will require viewers to have a membership, or pay a fee to access content. There is also a Clicker.tv app for Boxee.   Navigation Navigating in Clicker.tv is rather easy with your keyboard. Directional Keys: navigate up, down, left, and right. Enter: make a selection Backspace: return to previous screen Escape: return to the Clicker.tv home screen. Note: You can also navigate through Clicker.tv with your PC remote. Recommended Browsers Firefox 3.6 + Safari 4.0 + Internet Explorer 8 + Google Chrome Note: You’ll need the latest version of Flash installed to play the majority of content. Earlier versions of the above browsers may work, but for full keyboard functionality, stick with the recommendations. Using Clicker.tv The first time you go to Clicker.tv, (link below) you’ll be met with a welcome screen and some helpful hints. Click Enter when finished.   The Home screen feature Headliners, Trending Shows, and Trending Episodes. You can scroll through the different options and category links along the left side.   The Search link pulls up an onscreen keyboard so you can enter search terms with a remote as well as a keyboard. Type in your search terms and matching items are displayed on the screen.   You can also browse by a wide variety of categories. Select TV to browse only available TV programs. Or, browse only Movies in the movie category. There are also links for Web content and Music.   Creating an Account You can access all Clicker.tv content without an account, but a Clicker account allows users to create playlists and subscribe to shows and have them automatically added to their playlist. You’ll need to go to Clicker.com and create an account. You’ll find the link at the upper right of the page. Enter a username, password and email address. There also an option to link with Facebook, or you can simply Skip this step.   Go to Clicker.tv and sign in. You can manually type in your credentials or use the onscreen keyboard with your remote.   Settings If you’d prefer not to display content from premium sites or Netflix, you can remove them through the Settings. Toggle Amazon, iTunes and Netflix on or off.   Watching Episodes To watch an episode, select the image to begin playing from the default source, or select one of the other options. You can see in the example below that you can choose to watch the episode from Fox, Hulu, or Amazon Video on Demand.   Your episode will then launch and begin playing from your chosen source. If you choose a premium content source such as iTunes or Amazon’s VOD, you’ll be taken to the Amazon’s website or iTunes and prompted to purchase the content.   Playlists Once you’ve created an account and signed in, you can begin adding Shows to your playlist. Choose a series and select Add to Playlist.   You’ll see in the example below that Family Guy has been Added and the number 142 is shown next to the playlist icon to indicate that 142 episodes has been added to your playlist. Underneath the listings for each episode in your playlist you can mark as Watched, or Remove individual episodes.   
You can also view the playlist or make any changes from the Clicker.com website. Click on “Playlist” on the top right of the Clicker.com site to access your playlists. You can select individual episodes from your playlists, remove them, or mark them as watched or unwatched. Clicker.TV and Boxee Boxee offers a Clicker.TV app that features a limited amount of the Clicker.TV content. You’ll find Clicker.TV located in the Boxee Apps Library. Select the Clicker App and then choose Start. From the Clicker App interface you can search or browse for available content. Select an episode you’d like to view… Then select play in the pop up window. You can also add it to your Boxee queue, share it, or add a shortcut, just as you can from other Boxee apps. When you click play your episode will launch and begin playing in Boxee. Conclusion Clicker.TV is currently still in Beta and has some limitations. Typical remotes won’t work completely in all external websites. So, you’ll still need a keyboard to be able to perform some operations such as switching to full screen mode. The Boxee app offers a more fully remote friendly environment, but unfortunately lacks a good portion of the Clicker.tv content. As with many content sites, availability of certain programming may be limited by your geographic location. Want to add Clicker.TV functionality to Windows Media Center? You can do so through the Boxee Integration for Windows 7 Media Center plug-in. Clicker.tv Clicker.com

  • Ops Center Solaris 11 IPS Repository Management: Using ISO Images

    - by S Stelting
    Please join us for a live WebEx presentation of this topic on Tuesday, November 20th at 9am MDT. Details for the call are provided below: https://oracleconferencing.webex.com/oracleconferencing/j.php?ED=209834017&UID=1512096072&PW=NYTVlZTYxMzdm&RT=MiMxMQ%3D%3D Meeting password: oracle123 Call-in toll-free number: 1-866-682-4770 International numbers: http://www.intercall.com/oracle/access_numbers.htm Conference Code: 762 9343 # Security Code: 7777 # With Enterprise Manager Ops Center 12c, you can provision, patch, monitor and manage Oracle Solaris 11 instances. To do this, Ops Center creates and maintains a Solaris 11 Image Packaging System (IPS) repository on the Enterprise Controller. During the Enterprise Controller configuration, you can load repository content directly from Oracle's Support Web site and subsequently synchronize the repository as new content becomes available. Of course, you can also use Solaris 11 ISO images to create and update your Ops Center repository. There are a few excellent reasons for doing this: You're running Ops Center in disconnected mode, and don't have Internet access on your Enterprise Controller You'd rather avoid the bandwidth associated with live synchronization of a Solaris 11 package repository This demo will show you how to use Solaris 11 ISO images to set up and update your Ops Center repository. Prerequisites This tip assumes that you've already installed the Enterprise Controller on a Solaris 11 OS instance and that you're ready for post-install configuration. In addition, there are specific Ops Center and OS version requirements depending on which version of Solaris 11 you plan to install.You can get full details about the requirements in the Release Notes for Ops Center 12c update 2. Additional information is available in the Ops Center update 2 Readme document. Part 1: Using a Solaris 11 ISO Image to Create an Ops Center Repository Step 1 – Download the Solaris 11 Repository Image The Oracle Web site provides a number of download links for official Solaris 11 images. Among those links is a two-part downloadable repository image, which provides repository content for Solaris 11 SPARC and X86 architectures. In this case, I used the Solaris 11 11/11 image. First, navigate to the Oracle Web site and accept the OTN License agreement: http://www.oracle.com/technetwork/server-storage/solaris11/downloads/index.html Next, download both parts of the Solaris 11 repository image. I recommend using the Solaris 11 11/11 image, and have provided the URLs here: http://download.oracle.com/otn/solaris/11/sol-11-1111-repo-full.iso-ahttp://download.oracle.com/otn/solaris/11/sol-11-1111-repo-full.iso-b Finally, use the cat command to generate an ISO image you can use to create your repository: # cat sol-11-1111-repo-full.iso-a sol-11-1111-repo-full.iso-b > sol-11-1111-repo-full.iso The process is very similar if you plan to set up a Solaris 11.1 release in Ops Center. In that case, navigate to the Solaris 11 download page, accept the license agreement and download both parts of the Solaris 11.1 repository image. Use the cat command to create a single ISO image for Solaris 11.1 Step 2 – Mount the Solaris 11 ISO Image in your Local Filesystem Once you have created the Solaris 11 ISO file, use the mount command to attach it to your local filesystem. 
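    For example, a hedged sketch of what the mount step might look like on Solaris 11, using lofiadm to expose the ISO as a block device (the paths and the lofi device number are illustrative):

        # Attach the ISO to a loopback device; lofiadm prints the device it allocates
        lofiadm -a /export/isos/sol-11-1111-repo-full.iso

        # Mount the HSFS filesystem from that device (adjust /dev/lofi/1 to the printed value)
        mount -F hsfs -o ro /dev/lofi/1 /mnt

        # The repository content is then available under /mnt/repo
        pkgrepo info -s /mnt/repo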
After the image has been mounted, you can browse the repository from the ./repo subdirectory, and use the pkgrepo command to verify that Solaris 11 recognizes the content: Step 3 – Use the Image to Create your Ops Center Repository When you have confirmed the repository is available, you can use the image to create the Enterprise Controller repository. The operation will be slightly different depending on whether you configure Ops Center for Connected or Disconnected Mode operation.For connected mode operation, specify the mounted ./repo directory in step 4.1 of the configuration wizard, replacing the default Web-based URL. Since you're synchronizing from an OS repository image, you don't need to specify a key or certificate for the operation. For disconnected mode configuration, specify the Solaris 11 directory along with the path to the disconnected mode bundle downloaded by running the Ops Center harvester script: Ops Center will run a job to import package content from the mounted ISO image. A synchronization job can take several hours to run – in my case, the job ran for 3 hours, 22 minutes on a SunFire X4200 M2 server. During the job, Ops Center performs three important tasks: Synchronizes all content from the image and refreshes the repository Updates the IPS publisher information Creates OS Provisioning profiles and policies based on the content When the job is complete, you can unmount the ISO image from your Enterprise Controller. At that time, you can view the repository contents in your Ops Center Solaris 11 library. For the Solaris 11 11/11 release, you should see 8,668 packages and patches in the contents. You should also see default deployment plans for Solaris 11 provisioning. As part of the repository import, Ops Center generates plans and profiles for desktop, small and large servers for the SPARC and X86 architecture. Part 2: Using a Solaris 11 SRU to update an Ops Center Repository It's possible to use the same approach to upgrade your Ops Center repository to a Solaris 11 Support Repository Update, or SRU. Each SRU provides packages and updates to Solaris 11 - for example, SRU 8.5 provided the packaged for Oracle VM Server for SPARC 2.2 SRUs are available for download as ISO images from My Oracle Support, under document ID 1372094.1. The document provides download links for all SRUs which have been released by Oracle for Solaris 11. SRUs are cumulative, so later versions include the packages from earlier SRUs. After downloading an ISO image for an SRU, you can mount it to your local filesystem using a mount command similar to the one shown for Solaris 11 11/11. When the ISO image is mounted to the file system, you can perform the Add Content action from the Solaris 11 Library to synchronize packages and patches from the mounted image. I used the same mount point, so the repository URL was file://mnt/repo once again: After the synchronization of an SRU is complete, you can verify its content in the Solaris 11 library using the search function. The version pattern is 0.175.0.#, where the # is the same value as the SRU. In this example, I upgraded to SRU 1. The update job ran in just under 8 minutes, and a quick search shows that 22 software components were added to the repository: It's also possible to search for "Support Repository Update" to confirm the SRU was successfully added to the repository. Details on any of the update content are available by clicking the "View Details" button under the Packages/Patches entry.

  • SQL SERVER – Extending SQL Azure with Azure worker role – Guest Post by Paras Doshi

    - by pinaldave
    This is guest post by Paras Doshi. Paras Doshi is a research Intern at SolidQ.com and a Microsoft student partner. He is currently working in the domain of SQL Azure. SQL Azure is nothing but a SQL server in the cloud. SQL Azure provides benefits such as on demand rapid provisioning, cost-effective scalability, high availability and reduced management overhead. To see an introduction on SQL Azure, check out the post by Pinal here In this article, we are going to discuss how to extend SQL Azure with the Azure worker role. In other words, we will attempt to write a custom code and host it in the Azure worker role; the aim is to add some features that are not available with SQL Azure currently or features that need to be customized for flexibility. This way we extend the SQL Azure capability by building some solutions that run on Azure as worker roles. To understand Azure worker role, think of it as a windows service in cloud. Azure worker role can perform background processes, and to handle processes such as synchronization and backup, it becomes our ideal tool. First, we will focus on writing a worker role code that synchronizes SQL Azure databases. Before we do so, let’s see some scenarios in which synchronization between SQL Azure databases is beneficial: scaling out access over multiple databases enables us to handle workload efficiently As of now, SQL Azure database can be hosted in one of any six datacenters. By synchronizing databases located in different data centers, one can extend the data by enabling access to geographically distributed data Let us see some scenarios in which SQL server to SQL Azure database synchronization is beneficial To backup SQL Azure database on local infrastructure Rather than investing in local infrastructure for increased workloads, such workloads could be handled by cloud Ability to extend data to different datacenters located across the world to enable efficient data access from remote locations Now, let us develop cloud-based app that synchronizes SQL Azure databases. For an Introduction to developing cloud based apps, click here Now, in this article, I aim to provide a bird’s eye view of how a code that synchronizes SQL Azure databases look like and then list resources that can help you develop the solution from scratch. Now, if you newly add a worker role to the cloud-based project, this is how the code will look like. (Note: I have added comments to the skeleton code to point out the modifications that will be required in the code to carry out the SQL Azure synchronization. Note the placement of Setup() and Sync() function.) Click here (http://parasdoshi1989.files.wordpress.com/2011/06/code-snippet-1-for-extending-sql-azure-with-azure-worker-role1.pdf ) Enabling SQL Azure databases synchronization through sync framework is a two-step process. In the first step, the database is provisioned and sync framework creates tracking tables, stored procedures, triggers, and tables to store metadata to enable synchronization. This is one time step. The code for the same is put in the setup() function which is called once when the worker role starts. Now, the second step is continuous (or on demand) synchronization of SQL Azure databases by propagating changes between databases. This is done on a continuous basis by calling the sync() function in the while loop. The code logic to synchronize changes between SQL Azure databases should be put in the sync() function. Discussing the coding part step by step is out of the scope of this article. 
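    To make that structure concrete, here is a rough, hedged skeleton of such a worker role in C#; the Setup and Sync bodies are placeholders for the provisioning and synchronization logic described above, and the ten-minute pause is just an example interval:

        using System;
        using System.Threading;
        using Microsoft.WindowsAzure.ServiceRuntime;

        public class SyncWorkerRole : RoleEntryPoint
        {
            public override void Run()
            {
                // One-time provisioning: tracking tables, triggers and sync metadata
                Setup();

                while (true)
                {
                    // Propagate changes between the SQL Azure databases
                    Sync();

                    // Wait before the next synchronization pass
                    Thread.Sleep(TimeSpan.FromMinutes(10));
                }
            }

            private void Setup() { /* provision the databases for sync here */ }
            private void Sync()  { /* run the Sync Framework orchestration here */ }
        }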
Therefore, let me suggest you a resource, which is given here. Also, note that before you start developing the code, you will need to install SYNC framework 2.1 SDK (download here). Further, you will reference some libraries before you start coding. Details regarding the same are available in the article that I just pointed to. You will be charged for data transfers if the databases are not in the same datacenter. For pricing information, go here Currently, a tool named DATA SYNC, which is built on top of sync framework, is available in CTP that allows SQL Azure <-> SQL server and SQL Azure <-> SQL Azure synchronization (without writing single line of code); however, in some cases, the custom code shown in this blogpost provides flexibility that is not available with Data SYNC. For instance, filtering is not supported in the SQL Azure DATA SYNC CTP2; if you wish to have such a functionality now, then you have the option of developing a custom code using SYNC Framework. Now, this code can be easily extended to synchronize at some schedule. Let us say we want the databases to get synchronized every day at 10:00 pm. This is what the code will look like now: (http://parasdoshi1989.files.wordpress.com/2011/06/code-snippet-2-for-extending-sql-azure-with-azure-worker-role.pdf) Don’t you think that by writing such a code, we are imitating the functionality provided by the SQL server agent for a SQL server? Think about it. We are scheduling our administrative task by writing custom code – in other words, we have developed a “Light weight SQL server agent for SQL Azure!” Since the SQL server agent is not currently available in cloud, we have developed a solution that enables us to schedule tasks, and thus we have extended SQL Azure with the Azure worker role! Now if you wish to track jobs, you can do so by storing this data in SQL Azure (or Azure tables). The reason is that Windows Azure is a stateless platform, and we will need to store the state of the job ourselves and the choice that you have is SQL Azure or Azure tables. Note that this solution requires custom code and also it is not UI driven; however, for now, it can act as a temporary solution until SQL server agent is made available in the cloud. Moreover, this solution does not encompass functionalities that a SQL server agent provides, but it does open up an interesting avenue to schedule some of the tasks such as backup and synchronization of SQL Azure databases by writing some custom code in the Azure worker role. Now, let us see one more possibility – i.e., running BCP through a worker role in Azure-hosted services and then uploading the backup files either locally or on blobs. If you upload it locally, then consider the data transfer cost. If you upload it to blobs residing in the same datacenter, then no transfer cost applies but the cost on blob size applies. So, before choosing the option, you need to evaluate your preferences keeping the cost associated with each option in mind. In this article, I have shown that Azure worker role solution could be developed to synchronize SQL Azure databases. Moreover, a light-weight SQL server agent for SQL Azure can be developed. Also we discussed the possibility of running BCP through a worker role in Azure-hosted services for backing up our precious SQL Azure data. Thus, we can extend SQL Azure with the Azure worker role. But remember: you will be charged for running Azure worker roles. 
So at the end of the day, you need to ask – am I willing to build a custom code and pay money to achieve this functionality? I hope you found this blog post interesting. If you have any questions/feedback, you can comment below or you can mail me at Paras[at]student-partners[dot]com Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Azure, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

  • SQL – Migrate Database from SQL Server to NuoDB – A Quick Tutorial

    - by Pinal Dave
    Data is growing exponentially and every organization with growing data is thinking of next big innovation in the world of Big Data. Big data is a indeed a future for every organization at one point of the time. Just like every other next big thing, big data has its own challenges and issues. The biggest challenge associated with the big data is to find the ideal platform which supports the scalability and growth of the data. If you are a regular reader of this blog, you must be familiar with NuoDB. I have been working with NuoDB for a while and their recent release is the best thus far. NuoDB is an elastically scalable SQL database that can run on local host, datacenter and cloud-based resources. A key feature of the product is that it does not require sharding (read more here). Last week, I was able to install NuoDB in less than 90 seconds and have explored their Explorer and Admin sections. You can read about my experiences in these posts: SQL – Step by Step Guide to Download and Install NuoDB – Getting Started with NuoDB SQL – Quick Start with Admin Sections of NuoDB – Manage NuoDB Database SQL – Quick Start with Explorer Sections of NuoDB – Query NuoDB Database Many SQL Authority readers have been following me in my journey to evaluate NuoDB. One of the frequently asked questions I’ve received from you is if there is any way to migrate data from SQL Server to NuoDB. The fact is that there is indeed a way to do so and NuoDB provides a fantastic tool which can help users to do it. NuoDB Migrator is a command line utility that supports the migration of Microsoft SQL Server, MySQL, Oracle, and PostgreSQL schemas and data to NuoDB. The migration to NuoDB is a three-step process: NuoDB Migrator generates a schema for a target NuoDB database It loads data into the target NuoDB database It dumps data from the source database Let’s see how we can migrate our data from SQL Server to NuoDB using a simple three-step approach. But before we do that we will create a sample database in MSSQL and later we will migrate the same database to NuoDB: Setup Step 1: Build a sample data CREATE DATABASE [Test]; CREATE TABLE [Department]( [DepartmentID] [smallint] NOT NULL, [Name] VARCHAR(100) NOT NULL, [GroupName] VARCHAR(100) NOT NULL, [ModifiedDate] [datetime] NOT NULL, CONSTRAINT [PK_Department_DepartmentID] PRIMARY KEY CLUSTERED ( [DepartmentID] ASC ) ) ON [PRIMARY]; INSERT INTO Department SELECT * FROM AdventureWorks2012.HumanResources.Department; Note that I am using the SQL Server AdventureWorks database to build this sample table but you can build this sample table any way you prefer. Setup Step 2: Install Java 64 bit Before you can begin the migration process to NuoDB, make sure you have 64-bit Java installed on your computer. This is due to the fact that the NuoDB Migrator tool is built in Java. You can download 64-bit Java for Windows, Mac OSX, or Linux from the following link: http://java.com/en/download/manual.jsp. One more thing to remember is that you make sure that the path in your environment settings is set to your JAVA_HOME directory or else the tool will not work. Here is how you can do it: Go to My Computer >> Right Click >> Select Properties >> Click on Advanced System Settings >> Click on Environment Variables >> Click on New and enter the following values. Variable Name: JAVA_HOME Variable Value: C:\Program Files\Java\jre7 Make sure you enter your Java installation directory in the Variable Value field. Setup Step 3: Install JDBC driver for SQL Server. 
There are two JDBC drivers available for SQL Server. Select the one you prefer to use by following one of the two links below:

Microsoft JDBC Driver
jTDS JDBC Driver

In this example we will be using the jTDS JDBC driver. Once you download the driver, move it to your NuoDB installation folder. In my case, I have moved the JAR file of the driver into the C:\Program Files\NuoDB\tools\migrator\jar folder, as this is my NuoDB installation directory.

Now we are all set to start the three-step migration process from SQL Server to NuoDB.

Migration Step 1: NuoDB Schema Generation

Here is the command I use to generate a schema of my SQL Server database for NuoDB. First I go to the folder C:\Program Files\NuoDB\tools\migrator\bin and execute the nuodb-migrator.bat file. Note that my database name is 'test', and my username and password are also 'test'. My SQL Server database is running on localhost on port 1433, and the schema of the table is 'dbo'.

nuodb-migrator schema --source.driver=net.sourceforge.jtds.jdbc.Driver --source.url=jdbc:jtds:sqlserver://localhost:1433/ --source.username=test --source.password=test --source.catalog=test --source.schema=dbo --output.path=/tmp/schema.sql

The above command generates the schema of all my SQL Server tables and writes it to the file C:\tmp\schema.sql. You can open the schema.sql file and execute it directly in your NuoDB instance; you can follow the link here to see how to execute a SQL script in NuoDB. Please note that if you have not yet created the schema in the NuoDB database, you should create it before executing this step.

Migration Step 2: Generate the Dump File of the Data

Once you have recreated your schema in NuoDB from SQL Server, the next step is very easy. Here we create a CSV-format dump file, which will contain all the data from all the tables in the SQL Server database. The command to do so is very similar to the one above. Be aware that this step may take a while, depending on your database size.

nuodb-migrator dump --source.driver=net.sourceforge.jtds.jdbc.Driver --source.url=jdbc:jtds:sqlserver://localhost:1433/ --source.username=test --source.password=test --source.catalog=test --source.schema=dbo --output.type=csv --output.path=/tmp/dump.cat

Once the above command has executed successfully, you can find the dump file in the C:\tmp\ folder. However, you do not have to do anything with it manually; the third and final step completes the migration process.

Migration Step 3: Load the Data into NuoDB

After building the schema and taking a dump of the data, the final step loads the dump file into the NuoDB database.

nuodb-migrator load --target.url=jdbc:com.nuodb://localhost:48004/mytest --target.schema=dbo --target.username=test --target.password=test --input.path=/tmp/dump.cat

Please note that the above command targets the NuoDB database we have already created with the name "mytest". If the database does not exist, create it manually before executing the above command. I have kept the username and password as "test", but please make sure that you create a more secure password for your database for security reasons.

Voila! You're Done

That's it. You are done. It took three setup steps and three migration steps to migrate your SQL Server database to NuoDB. You can now start exploring the database and building excellent, scale-out applications.
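If you want to run all three migration steps in one go, the commands above can be wrapped in a small script. Below is a minimal PowerShell sketch that simply chains the same schema, dump, and load commands shown in this tutorial; the installation path, database names, and credentials are the ones used above and should be replaced with your own, and the wrapper itself (including the basic error checks) is only an illustration, not part of the NuoDB tooling.

# Assumes the NuoDB installation directory and the 'test' credentials used in this tutorial
$migrator = "C:\Program Files\NuoDB\tools\migrator\bin\nuodb-migrator.bat"
$sourceArgs = @(
    "--source.driver=net.sourceforge.jtds.jdbc.Driver",
    "--source.url=jdbc:jtds:sqlserver://localhost:1433/",
    "--source.username=test",
    "--source.password=test",
    "--source.catalog=test",
    "--source.schema=dbo"
)

# Step 1: generate the NuoDB schema from the SQL Server database
& $migrator schema $sourceArgs "--output.path=/tmp/schema.sql"
if ($LASTEXITCODE -ne 0) { throw "Schema generation failed" }

# Step 2: dump the SQL Server data to a CSV dump file
& $migrator dump $sourceArgs "--output.type=csv" "--output.path=/tmp/dump.cat"
if ($LASTEXITCODE -ne 0) { throw "Data dump failed" }

# Step 3: load the dump file into the target NuoDB database
& $migrator load "--target.url=jdbc:com.nuodb://localhost:48004/mytest" "--target.schema=dbo" "--target.username=test" "--target.password=test" "--input.path=/tmp/dump.cat"
if ($LASTEXITCODE -ne 0) { throw "Data load failed" }

Keep in mind that the schema.sql file produced in Step 1 still has to be executed against the NuoDB database before the load step, exactly as described above.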
In this blog post, I have done my best to come up with a simple and easy process that you can follow to migrate your application from SQL Server to NuoDB.

Download NuoDB

I strongly encourage you to download NuoDB and go through my three-step migration tutorial from SQL Server to NuoDB. Additionally, here are two very important blog posts from NuoDB CTO Seth Proctor on the concept of Administrative Domains. An Administrative Domain is a collection of hosts that can run one or multiple databases; each database has its own TEs and SMs, but all are managed within the Admin Console for that particular domain.

http://www.nuodb.com/techblog/2013/03/11/getting-started-provisioning-a-domain/
http://www.nuodb.com/techblog/2013/03/14/getting-started-running-a-database/

Reference: Pinal Dave (http://blog.sqlauthority.com)

Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology. Tagged: NuoDB

    Read the article

  • Clarity is important, both in question and in answer.

    - by gerrylowry
clarity is important ... i'm often reminded of the Clouseau movie in which Peter Sellers as Chief Inspector Clouseau asks a hotel clerk "Does your dog bite?" ... the clerk answers "no" ... after Clouseau has been bitten by the dog, he looks at the hotel clerk, who says "That's not my dog". Clarity is important, both in question and in answer.

i've been a member of forums.asp.net since 2008 ... like many of my peers at forums.asp.net, i've answered my fair share of questions. FWIW, the purpose of this, my first web log post to http://weblogs.asp.net/gerrylowry, is to help new members ask better questions and in turn get better answers.

TIMTOWTDI = there is more than one way to do it.

imho, the best way to ask a question in any forum, or even person to person, is to first formulate your question and then try to answer your own question yourself.

Things to consider when asking (the more complete your question, the more likely you'll get the answer you require):

-- have you searched Google and/or your favourite search engine(s) before posting your question to forums.asp.net? examples:
site:msdn.microsoft.com entity framework 5.0 c#
http://lmgtfy.com/?q=site%3Amsdn.microsoft.com+entity+framework+5.0+c%23
site:forums.asp.net MVC tutorial c#
http://lmgtfy.com/?q=site%3Aforums.asp.net+MVC+tutorial+c%23

-- are you asking your question in the correct forum? look at the forums' descriptions at http://forums.asp.net/; examples:
Getting Started: If you have a general ASP.NET question on a topic that's not covered by one of the other more specific forums - ask it here.
MVC: Discussions regarding ASP.NET Model-View-Controller (MVC)
C#: Questions about using C# for ASP.NET development
Note: if your question pertains more to c# than to MVC, choosing the C# forum is likely to be more appropriate.

-- is your post subject clear and concise, yet not too vague? compare these three subjects (all three had something to do with GridView):
(1) please help
(2) gridview
(3) How to show newline in GridView

-- have you clearly explained your scenario?
compare: "my leg hurts" with "when i walk too much, my right knee hurts in the knee joint"
compare: "my code does not work" with "when i enter a date as 2012-11-8, i get a FormatException"

-- have you checked your spelling, your grammar, and your English? for better or worse, English is the language of forums.asp.net ... many of the currently 170,000++ forums.asp.net members are not native speakers of English; that's okay ... however, there are times when choosing the more appropriate words will likely get one a better answer; fortunately, there are web tools to help you formulate your question, for example, http://translate.google.com/.

-- have you provided relevant information about your environment? here are a few examples ... feel free to include other items in your question ... rule of thumb: if you think a given detail is relevant, it likely is
-- what technology are you using? ASP.NET MVC 4, ASP.NET MVC 3, WebForms, ...
-- what version of Visual Studio are you using? vs2012 (ultimate, professional, express), vs2010, vs2008, ...
-- are you hosting your own website? are you using a shared hosting service?
-- are you experiencing difficulties in just one browser? more than one browser?
-- what browser version(s) are you using? ie8? ie9? ...
-- what is your operating system? win8, win7, vista, XP, server 2008 R2, ...
-- what is your database? SQL Server 2008 R2, ss2005, MySQL, Oracle, ...
-- what is your web server? iis 7.5, iis 6, ...

-- have you provided enough information for someone to be able to answer your question? here's an actual example from an O.P. that i hope is self-explanatory:
"I'm trying to make a simple calculator when i write the code in windows application it worked when i tried it in web application it doesn't work and there are no errors what should i do ??!!"

-- have you included unnecessary information? more than once, i've seen the O.P. (original post, original poster) include many extra lines of code that were not relevant to the actual question; the more unnecessary code you include, the less likely your volunteer peers will be motivated to donate their time to help you.

-- have you asked the question that you want answered? "Does this dog bite?"

-- are your expectations reasonable?
-- generally, the persons who are going to answer your questions are your peers ... they are unpaid volunteers ...
-- are you looking for help with your homework, work assignment, or hobby? or are you expecting someone else to do your work for you?
-- do you expect a complete solution, or are you simply looking for guidance and direction?
-- you are likely to get more help by first making a reasonable effort to help yourself

Clarity is important, both in question and in answer. if you are answering someone else's question, please remember that clear answers are just as important as clear questions; would you understand your own answer?

Things to consider when answering:

-- have you tested your code example? if you have, say so; if you've not tested your code example, also say so
-- imho, it's okay to guess as long as you clearly state that you're guessing ... sometimes a wrong guess can still help the O.P. find her/his way to the right answer
-- meanness does not contribute to being helpful; sometimes one may become frustrated with the O.P. and/or others participating in a thread; if that happens to you, be kind regardless ... speaking from my own experience, at least once i've allowed myself to be frustrated into writing something inappropriate that i've regretted later ... being a meany does not feel good ... being kind and helpful feels fantastic!

Tip: before asking your question, read more than a few existing questions and answers to get a sense of how your peers ask and answer questions.

Gerry

P.S.: try to avoid necroposting and piggybacking. necroposting is adding to an old post, especially one that was resolved months ago. piggybacking is adding your own question to someone else's thread.

    Read the article

  • CI Deployment Of Azure Web Roles Using TeamCity

    - by srkirkland
After recently migrating an important new website to use Windows Azure "Web Roles", I wanted an easier way to deploy new versions to the Azure staging environment, as well as a reliable process to roll back deployments to a known-good source control commit checkpoint. By configuring our JetBrains TeamCity CI server to use the Windows Azure PowerShell cmdlets to create new automated deployments, I'll show you how to take control of your Azure publish process.

Step 0: Configuring your Azure Project in Visual Studio

Before we can start automating the deployment, we should make sure manual deployments from Visual Studio are working properly. Detailed information for setting up deployments can be found at http://msdn.microsoft.com/en-us/library/windowsazure/ff683672.aspx#PublishAzure or with some quick Googling, but the basics are as follows:

Install the prerequisite Windows Azure SDK
Create an Azure project by right-clicking on your web project and choosing "Add Windows Azure Cloud Service Project" (or by manually adding that project type)
Configure your Role and Service Configuration/Definition as desired
Right-click on your Azure project, choose "Publish," create a publish profile, and push to your web role

You don't actually have to do step #4 and create a publish profile, but it's a good exercise to make sure everything is working properly. Once your Windows Azure project is set up correctly, we are ready to move on to understanding the Azure publish process.

Understanding the Azure Publish Process

The actual Windows Azure project is fairly simple at its core: it builds your dependent roles (in our case, a web role) against a specific service and build configuration, and outputs two files:

ServiceConfiguration.Cloud.cscfg: the file containing your package configuration info, for example Instance Count, OsFamily, ConnectionString, and other Setting information.
ProjectName.Azure.cspkg: the package file that contains the guts of your deployment, including all deployable files.

When you package your Azure project, these two files are created within the directory ./[ProjectName].Azure/bin/[ConfigName]/app.publish/. If you want to build your Azure project from the command line, it's as simple as calling MSBuild with the "Publish" target:

msbuild.exe /target:Publish

Windows Azure PowerShell Cmdlets

The last pieces of the puzzle that make CI automation possible are the Windows Azure PowerShell cmdlets (http://msdn.microsoft.com/en-us/library/windowsazure/jj156055.aspx). These cmdlets are what let us create deployments without Visual Studio or other user intervention.

Preparing TeamCity for Azure Deployments

Now we are ready to get our TeamCity server set up so it can build and deploy Windows Azure projects, which, as we now know, requires the Azure SDK and the Windows Azure PowerShell cmdlets. Installing the Azure SDK is easy enough: just go to https://www.windowsazure.com/en-us/develop/net/ and click "Install".

Once the SDK is installed, I recommend running a test build to make sure your project is building correctly. You'll want to set up your build step using MSBuild with the "Publish" target against your solution file. Assuming the build was successful, you will now have the two *.cspkg and *.cscfg files within your build directory.
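To sanity-check the publish target outside TeamCity, you can run the same MSBuild invocation from a console. Below is a minimal PowerShell sketch of that idea; the MSBuild path, solution file name, and build configuration are placeholders for illustration (not values from this article) and should be replaced with your own.

# Hypothetical paths and names -- substitute your own solution and configuration
$msbuild = "C:\Windows\Microsoft.NET\Framework64\v4.0.30319\msbuild.exe"
$solution = "MyWebApp.sln"
$configuration = "Release"

# Build the Azure project's Publish target, which produces the .cspkg and .cscfg files
& $msbuild $solution /target:Publish "/p:Configuration=$configuration"

# The package output lands under [ProjectName].Azure\bin\$configuration\app.publish\
Get-ChildItem -Recurse -Include *.cspkg, *.cscfg | Select-Object FullName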
If the build was red (failed), take a look at the build logs and keep an eye out for "unsupported project type" or other build errors, which will need to be addressed before the CI deployment can be completed. With a successful build we are now ready to install and configure the Windows Azure PowerShell cmdlets:

Follow the instructions at http://msdn.microsoft.com/en-us/library/windowsazure/jj554332 to install the cmdlets and configure PowerShell.
After installing the cmdlets, you'll need to get your Azure subscription info using the Get-AzurePublishSettingsFile command.
Store the resulting *.publishsettings file somewhere you can get to easily, like C:\TeamCity, because you will need to reference it later from your deploy script.

Scripting the CI Deploy Process

Now that the cmdlets are installed on our TeamCity server, we are ready to script the actual deployment using a TeamCity "PowerShell" build runner. Before we look at any code, here's a breakdown of our deployment algorithm:

1. Set up your variables, including the location of the *.cspkg and *.cscfg files produced in the earlier MSBuild step (remember, the folder is something like [ProjectName].Azure/bin/[ConfigName]/app.publish/)
2. Import the Windows Azure PowerShell cmdlets
3. Import and set your Azure subscription information (this is basically your authentication/authorization step, so protect your settings file)
4. Look for a current deployment; if you find one, upgrade it, otherwise create a new deployment

Pretty simple and straightforward. Now let's look at the code (also available as a gist here: https://gist.github.com/3694398):

$subscription = "[Your Subscription Name]"
$service = "[Your Azure Service Name]"
$slot = "staging" #staging or production
$package = "[ProjectName]\bin\[BuildConfigName]\app.publish\[ProjectName].cspkg"
$configuration = "[ProjectName]\bin\[BuildConfigName]\app.publish\ServiceConfiguration.Cloud.cscfg"
$timeStampFormat = "g"
$deploymentLabel = "ContinuousDeploy to $service v%build.number%"

Write-Output "Running Azure Imports"
Import-Module "C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\Azure\*.psd1"
Import-AzurePublishSettingsFile "C:\TeamCity\[PSFileName].publishsettings"
Set-AzureSubscription -CurrentStorageAccount $service -SubscriptionName $subscription

function Publish(){
    $deployment = Get-AzureDeployment -ServiceName $service -Slot $slot -ErrorVariable a -ErrorAction silentlycontinue

    if ($a[0] -ne $null) {
        Write-Output "$(Get-Date -f $timeStampFormat) - No deployment is detected. Creating a new deployment."
    }

    if ($deployment.Name -ne $null) {
        # Update deployment in place (usually faster, cheaper, won't destroy VIP)
        Write-Output "$(Get-Date -f $timeStampFormat) - Deployment exists in $service. Upgrading deployment."
        UpgradeDeployment
    } else {
        CreateNewDeployment
    }
}

function CreateNewDeployment() {
    write-progress -id 3 -activity "Creating New Deployment" -Status "In progress"
    Write-Output "$(Get-Date -f $timeStampFormat) - Creating New Deployment: In progress"

    $opstat = New-AzureDeployment -Slot $slot -Package $package -Configuration $configuration -label $deploymentLabel -ServiceName $service

    $completeDeployment = Get-AzureDeployment -ServiceName $service -Slot $slot
    $completeDeploymentID = $completeDeployment.deploymentid

    write-progress -id 3 -activity "Creating New Deployment" -completed -Status "Complete"
    Write-Output "$(Get-Date -f $timeStampFormat) - Creating New Deployment: Complete, Deployment ID: $completeDeploymentID"
}

function UpgradeDeployment() {
    write-progress -id 3 -activity "Upgrading Deployment" -Status "In progress"
    Write-Output "$(Get-Date -f $timeStampFormat) - Upgrading Deployment: In progress"

    # perform Update-Deployment
    $setdeployment = Set-AzureDeployment -Upgrade -Slot $slot -Package $package -Configuration $configuration -label $deploymentLabel -ServiceName $service -Force

    $completeDeployment = Get-AzureDeployment -ServiceName $service -Slot $slot
    $completeDeploymentID = $completeDeployment.deploymentid

    write-progress -id 3 -activity "Upgrading Deployment" -completed -Status "Complete"
    Write-Output "$(Get-Date -f $timeStampFormat) - Upgrading Deployment: Complete, Deployment ID: $completeDeploymentID"
}

Write-Output "Create Azure Deployment"
Publish

Creating the TeamCity Build Step

The only thing left is to create a second build step, after your MSBuild "Publish" step, with the build runner type "PowerShell". Set your script to "Source Code", set the script execution mode to "Put script into PowerShell stdin with "-Command" arguments", and then copy/paste in the above script (replacing the placeholder sections with your values).

Wrap Up

After combining the MSBuild /target:Publish step (which creates the necessary Windows Azure *.cspkg and *.cscfg files) and a PowerShell script step that uses the Azure PowerShell cmdlets, we have a fully deployable build configuration in TeamCity. You can configure this step to run whenever you'd like using build triggers – for example, you could deploy whenever a new commit lands on the master branch and passes all required tests.

In the script I've hardcoded that every deployment goes to the Staging environment on Azure, but you could deploy straight to Production if you want to, or even set up a deployment configuration variable and set it as desired.

Once your TeamCity build configuration is complete, clicking the "Run" button will compile, publish, and deploy all of your code to Windows Azure. One additional enormous benefit of automating the process this way is that you can easily deploy any specific source control changeset by clicking the little ellipsis button next to "Run", which brings up a dialog where you can select the last change to use for your deployment. Since Azure Web Role deployments don't have any rollback functionality, this is a critical feature.

Enjoy!
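As a follow-up to the note about the hardcoded Staging slot: one simple option is to let TeamCity supply the slot through a build parameter, so the same script can target staging or production. Below is a minimal sketch of that idea; the environment variable name DEPLOY_SLOT and the parameter mapping are my own hypothetical naming, not part of the original script or of TeamCity's defaults, so adapt them to however you expose parameters in your build configuration.

# Hypothetical: a TeamCity parameter exposed as the environment variable DEPLOY_SLOT
$slot = $env:DEPLOY_SLOT
if ([string]::IsNullOrEmpty($slot)) {
    # Fall back to staging when the parameter is not set, mirroring the original script's default
    $slot = "staging"
}
if ($slot -ne "staging" -and $slot -ne "production") {
    throw "Unknown deployment slot '$slot' - expected 'staging' or 'production'"
}
Write-Output "Deploying to the $slot slot"

With something like this in place, a separate "deploy to production" build configuration only needs to override the one parameter rather than duplicate the whole script.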

    Read the article
