Search Results

Search found 3558 results on 143 pages for 'hosted'.

Page 102/143 | < Previous Page | 98 99 100 101 102 103 104 105 106 107 108 109  | Next Page >

  • SQL Community – stronger than ever

    - by Rob Farley
    I posted a few hours ago about a reflection of the Summit, but I wanted to write another one for this month’s T-SQL Tuesday, hosted by Chris Yates. In January of this year, Adam Jorgensen and I joked around in a video that was used for the SQL Server 2012 launch. We were asked about SQLFamily, and we said how we were like brothers – how we could drive each other crazy (the look he gave me as I patted his stomach was priceless), but that we’d still look out for each other, just like in a real family. And this is really true. Last week at the PASS Summit, there was a lot going on. I was busy as always, as were many others. People told me their good news, their awful news, and some whinged to me about other people who were driving them crazy. But throughout this, people in the SQL Server community genuinely want the best for each other. I’m sure there are exceptions, but I don’t see much of this. Australians aren’t big on cheering for each other. Neither are the English. I think we see it as an American thing. It could be easy for me to consider that the SQL Community that I see at the PASS Summit is mainly there because it’s a primarily American organisation. But when you speak to people like sponsors, or people involved in several types of communities, you quickly hear that it’s not just about that – that PASS has something special. It goes beyond cheering, it’s a strong desire to see each other succeed. I see MVPs feel disappointed for those people who don’t get awarded. I see Summit speakers concerned for those who missed out on the chance to speak. I see chapter leaders excited about the opportunity to help other chapters. And throughout, I see a gentleness and love for people that you rarely see outside the church (and sadly, many churches don’t have it either). Chris points out that the M-W dictionary defined community as “a unified body of individuals”, and I feel like this is true of the SQL Server community. It goes deeper though. It’s not just unity – and we’re most definitely different to each other – it’s more than that. We all want to see each other grow. We all want to pull ourselves up, to serve each other, and to grow PASS into something more than it is today. In that other post of mine I wrote a bit about Paul White’s experience at his first Summit. His missus wrote to me on Facebook saying that she welled up over it. But that emotion was nothing about what I wrote – it was about the reaction that the SQL Community had had to Paul. Be proud of it, my SQL brothers and sisters, and never lose it.

    Read the article

  • 0xC0017011 and other error messages - what is the error message text?

    Recently there was a bug raised against BIDS Helper which originated in my Expression Editor control. Thankfully the person who raised it kindly included a screenshot, so I had the error code (HRESULT 0xC0017011) and a stack trace that pointed the finger firmly at my control, but no error message text. The code itself looked fine, so I searched on the error code but got no results. I’d expected to get a hit from Books Online with the Integration Services Error and Message Reference topic at the very least, but no joy. There is however a more accurate and definitive reference, namely the header file that defines all these codes, dtsmsg.h, which you can find at C:\Program Files (x86)\Microsoft SQL Server\110\SDK\Include\dtsmsg.h. Looking the code up in the header file gave me a much more useful error message:

        ////////////////////////////////////////////////////////////////////////////
        // The parameter is sensitive
        //
        // MessageId: DTS_E_SENSITIVEPARAMVALUENOTALLOWED
        //
        // MessageText:
        //
        // Accessing value of the parameter variable for the sensitive parameter "%1!s!" is not allowed. Verify that the variable is used properly and that it protects the sensitive information.
        //
        #define DTS_E_SENSITIVEPARAMVALUENOTALLOWED ((HRESULT)0xC0017011L)

    Unfortunately I’d forgotten all about this. By the time I had remembered about it, the person who raised the issue had managed to narrow it down to something to do with having a sensitive parameter. Putting that together with the error message I’d finally found, a quick poke around in the code and I found the new GetSensitiveValue method, which seemed to do the trick. The HResult fields are also listed online, but that page only shows the short error message, and it doesn’t include the all-important HRESULT value itself. So let this be a lesson to you (and me!): if you need to check an SSIS error, go straight to the horse’s mouth - dtsmsg.h. This is particularly true when working with early builds or CTP releases, when we expect the documentation to be a bit behind. There is also a programmatic approach to getting better SSIS error messages. I should take another look at the error handling in the control, or the way it is hosted in BIDS Helper. I suspect that if I use an implementation of Microsoft.SqlServer.Dts.Runtime.Wrapper.IDTSInfoEvents100 I could catch the error itself and get the full error message text, which I could then report back. This would obviously be a better user experience and also make it easier to diagnose any issues like this in the future. See ExpressionEvaluator.cs for an example of this in use in the Expression Editor control.
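    The snippet below is not part of the original post - just a rough C# sketch of the "go straight to dtsmsg.h" idea: given an HRESULT string such as 0xC0017011, scan the header for the matching #define and print the comment block above it, which carries the full MessageText. The default header path is the SQL Server 2012 location mentioned above and is an assumption; adjust it for other versions.

        using System;
        using System.IO;

        class DtsMsgLookup
        {
            static void Main(string[] args)
            {
                // Defaults are assumptions: the 2012 SDK path and the HRESULT from the post.
                string hresult = args.Length > 0 ? args[0] : "0xC0017011";
                string headerPath = args.Length > 1
                    ? args[1]
                    : @"C:\Program Files (x86)\Microsoft SQL Server\110\SDK\Include\dtsmsg.h";

                string[] lines = File.ReadAllLines(headerPath);
                for (int i = 0; i < lines.Length; i++)
                {
                    // #define lines look like: #define DTS_E_... ((HRESULT)0xC0017011L)
                    if (lines[i].TrimStart().StartsWith("#define") &&
                        lines[i].IndexOf(hresult, StringComparison.OrdinalIgnoreCase) >= 0)
                    {
                        // Walk back over the preceding // comment block, then print it plus the #define.
                        int start = i;
                        while (start > 0 && lines[start - 1].TrimStart().StartsWith("//"))
                            start--;
                        for (int j = start; j <= i; j++)
                            Console.WriteLine(lines[j]);
                        return;
                    }
                }
                Console.WriteLine("No match found for " + hresult);
            }
        }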

    Read the article

  • Newsletter sent with drupal goes to Spam Folder [closed]

    - by HerrSerker
    Possible Duplicate: How could I prevent my mail from being recognized as spam? I'm sending a newsletter with Drupal's Simplenews module. The website is hosted on a 1und1 server in Germany (as seen in the header domains online.de and kundenserver.de). When I send it, it goes to the Spam folder in Yahoo & GMail mailboxes, but not in the Spam folder in web.de, Hotmail and GMX mailboxes. Here is what I have in the mail header (for Yahoo in this example):

        Received: from 12.345.678.90 (EHLO sXXXXXXXXX.online.de) (12.345.678.90) by mtaXXX.mail.kks.yahoo.co.jp with SMTP; Fri, 15 Jun 2012 18:45:24 +0900
        Received: from [127.0.0.1] (helo=infongdXXXXX.rtr.kundenserver.de) by sXXXXXXXXX.online.de with esmtp (Exim 4.72) (envelope-from <[email protected]>) id 1SfT5k-00068r-Q8 for [email protected]; Fri, 15 Jun 2012 11:45:20 +0200
        Received: from 83.136.130.41 (IP may be forged by CGI script) by infongdXXXXX.rtr.kundenserver.de with HTTP id 0Z04SW-1SQTKp3LPr-00YxYk; Fri, 15 Jun 2012 11:45:20 +0200
        From: SENDER <[email protected]>
        To: "[email protected]" <[email protected]>
        Date: Fri, 15 Jun 2012 11:45:20 +0200
        Subject: This is the subject of the newsletter
        Thread-Topic: This is the subject of the newsletter
        Thread-Index: Ac1K3nT42juzo7uCSkq5dTlby1ZvpQ==
        List-Unsubscribe: <http://www.example.com/newsletter/confirm/remove/XXXXXXXXX>
        X-MS-Has-Attach:
        X-Auto-Response-Suppress: All
        X-MS-TNEF-Correlator:
        x-originating-ip: [12.345.678.90]
        authentication-results: mtaXXX.mail.kks.yahoo.co.jp from=example.com; domainkeys=neutral (no sig); dkim=neutral (no sig) [email protected]
        errors-to: "SENDER" <[email protected]>
        received-spf: none (sXXXXXXXXX.online.de: domain of [email protected] does not designate permitted sender hosts)
        x-apparently-to: [email protected] via 123.45.67.890; Fri, 15 Jun 2012 18:45:25 +0900
        x-sender-info: <[email protected]>
        content-length: 13762
        Content-Type: multipart/alternative; boundary="_000_7471797868716571796675707173696675806577726778666766687_"
        MIME-Version: 1.0

    I cannot see any direct spam filter message in this, but I'm kind of stunned by the "Received: from 83.136.130.41 (IP may be forged by CGI script)" part. After I searched a bit, it seems that this is a special 'feature' of 1und1 mail servers. Here are my questions: Is it possible that, if I get rid of the 'IP may be forged' part, the mail will no longer be regarded as spam? If so, does anyone know how I can get rid of it in Drupal?
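    (Not part of the original question - just an illustration of one line in those headers.) The "received-spf: none ... does not designate permitted sender hosts" entry means the sending domain publishes no SPF record, which is one of the things receiving spam filters can hold against the message. A hedged sketch of the kind of TXT record that addresses it, using the anonymised sending IP from the headers above as a stand-in for the real address:

        example.com.   3600   IN   TXT   "v=spf1 a mx ip4:12.345.678.90 ~all"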

    Read the article

  • Sharing one static ip for both ftp and www service

    - by user11496
    Trying to figure out how to update the zone record and configure the web server so that one application on the web server is accessible to the public. I'm not good at NS/DNS/NAT/firewall/routing/port forwarding/networking etc. "faraday" is the intranet name. Everyone within the local network can access all applications hosted on "faraday". The hostname for the web server is "www"; the FTP server is "ftpserver". Both servers run RHEL4. The goal is to allow anyone outside the company network (the public) to access only one of the many applications on "faraday". Hope somebody can help me with some of the questions below, if not all. From the ZoneEdit record, the static IP is used by FTP now. Can I use the same existing static IP - 219.95.10.100 - for the web service? Currently anyone who enters "http://www.abc.com.my" is directed to "http://www.abc.com"; I don't want this to change. Currently no one, except employees on the local network, can access the "faraday" web pages. How do I configure it so that when anyone types "http://thisapp.abc.com.my" in their web browser, the URL leads them to "http://faraday/thisapp" (the application folder is /var/www/html/thisapp on the RHEL4 web server)? If possible, how do I set it up so the URL continues to show "http://thisapp.abc.com.my" instead of "http://faraday/thisapp"? How do I limit/restrict users (those who are not from the local network) so they only have access to "http://thisapp.abc.com.my", but not "http://faraday" or "http://faraday/anotherapp", etc.? What configuration changes are needed in /etc/httpd.conf on the web server? The company domain name is "abc.com.my". Following are the zone records on www.zoneedit.com:

        Subdomain   Type    IP
        sdsl        A       219.95.10.100
        ftp         CNAME   sdsl.abc.com.my
        @           NS      ns3.zoneedit.com
        @           NS      ns7.zoneedit.com

        WebForward record:
        New Domain       Destination          Cloaked
        www.abc.com.my   http://www.abc.com   N

    On my local DNS server, there are 2 zone files: abc.com.my and pnmy.abc.com.

        > cat abc.com.my.zone
        ftp    CNAME   ftp.pnmy.abc.com.
        sdsl   A       219.95.10.100

        > cat pnmy.abc.com.zone
        ftp         CNAME   ftpserver
        ftpserver   A       172.16.5.1
        faraday     CNAME   www
        www         A       172.16.5.2
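    (Not from the question - a hedged sketch only, reusing the names above.) The usual pattern for the "thisapp" requirement is a public A record for thisapp pointing at the existing static IP, port 80 forwarded from the firewall/NAT to the internal web server (172.16.5.2), and a name-based virtual host in the Apache configuration (on RHEL4 the main file is typically /etc/httpd/conf/httpd.conf) that serves /var/www/html/thisapp whenever the Host header is thisapp.abc.com.my - that way the browser keeps showing the public URL:

        # ZoneEdit (public DNS) - an assumed record, mirroring the existing sdsl entry:
        #   thisapp   A   219.95.10.100

        # httpd.conf - name-based virtual hosting (Apache 2.0 syntax):
        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName   thisapp.abc.com.my
            DocumentRoot /var/www/html/thisapp
        </VirtualHost>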

    Read the article

  • Put a link on the nav bar in Wordpress

    - by Rafe Kettler
    I have a WordPress blog. On the same domain, I have some other stuff hosted that isn't part of my WP install. I want to link to those other places on my domain from the top menu bar (nav bar) on my blog. How can I do that? The theme is Lightword; the relevant header.php code follows:

        <body <?php body_class(); ?>>
        <div id="wrapper">
        <?php lightword_header_image(); ?>
        <div id="header">
        <?php lightword_rss_feed(); ?>
        <div id="top_bar">
        <div class="center_menu">
        <ul id="front_menu" <?php
          global $lw_remove_searchbox, $lw_use_wp_menus;
          $lw_menu_width = "";
          if($lw_remove_searchbox == "true") $lw_menu_width = " class=\"expand\" ";
          echo $lw_menu_width;
        ?>>
        <?php echo lightword_homebtn(__('Home','lightword')); ?>
        <?php
        if ( function_exists('wp_nav_menu') && $lw_use_wp_menus != "true") {
            $lightword_menu = wp_nav_menu( array(
                'menu' => 'lightword_top_menu',
                'echo' => false,
                'menu_id' => 'front_menu',
                'container' => '',
                'theme_location' => 'lightword_top_menu',
                'link_before' => '<span>',
                'link_after' => '</span>'
            ) );
            $lightword_menu = preg_replace( array( '/^<ul id="front_menu" class="menu">/', '/\n<\/ul>$/' ), '', $lightword_menu);
            echo $lightword_menu;
        } else {
            echo lightword_wp_list_pages();
        }
        ?>
        </ul>
        </div>
        <?php echo lightword_searchbox(); ?>
        </div>
        </div>
        <div id="content">

    Read the article

  • Multiple country-specific domains or one global domain [closed]

    - by CJM
    Possible Duplicate: How should I structure my urls for both SEO and localization? My company currently has its main (English) site on a .com domain with a .co.uk alias. In addition, we have separate sites for certain other countries - these are also hosted in the UK but are distinct sites with country-specific domain names (.de, .fr, .se, .es), and the sites have differing amounts of distinct but overlapping content. For example, the .es site is entirely in Spanish and has a page for every section of the UK site but little else, whereas the .de site has much more content (but still less than the UK site), in German, and geared towards our business focus in that country. The main point is that the content in the additional sites is a subset of the UK site, is translated into the local language, and although it is sometimes simply a translated version of UK content, it is usually 'tweaked' for the local market and, in certain areas, contains unique content. The other sites get a fraction of the traffic of the UK site. This is perfectly understandable since the biggest chunk of work comes from the UK, and we've been established here for over 30 years. However, we want to build up our overseas business, and part of that is building up our websites to support this. The Question: I posed a suggestion to the business that we might consider consolidating all our websites onto the .com domain but with /en/de/fr/se/etc sections, as plenty of other companies seem to do. The theory was that the non-English sites would benefit from the greater reputation of the parent .com domain, and that all the content would be mutually supporting - my fear is that the child domains are too small to compete on their own against competitors who are established in these countries. Speaking to an SEO consultant from my hosting company, he feels that this move would have some benefit (for the reasons mentioned), but that it would likely be significantly outweighed by the loss of the benefits of localised domains. Specifically, he said that since the Panda update, and particularly the two sets of changes this year, we would lose more than we would gain. Having done some Panda research since, I've had my eyes opened on many issues, but curiously I haven't come across much that mentions localised domain names, though I do question whether Google would see it as duplicated content. It's not that I disagree with the consultant; I just want to know more before I make recommendations to my company. What is the prevailing opinion in this case? Would I gain anything from consolidating country-specific content onto one domain? Would Google see this as duplicate content? Would there be an even greater penalty from the loss of country-specific domains? And is there anything else I can do to help support the smaller, country-specific domains?

    Read the article

  • When a problem is resolved

    - by Rob Farley
    This month’s T-SQL Tuesday is hosted by Jen McCown, and she’s picked the topic of Resolutions. It’s a new year, and she’s thinking about what people have resolved to do this year. Unfortunately, I’ve never really done resolutions like that. I see too many people resolve to quit smoking, or lose weight, or whatever, and fail miserably. I’m not saying I don’t set goals, but it’s not a thing for New Year. The obvious joke is “1920x1080” as a resolution, but I’m not going there. I think Resolving is a strange word. It makes it sound like I’m having to solve a problem a second time, when actually, it’s more along the lines of solving a problem well enough for it to count as finished. If something has been resolved, a solution has been provided. There is a resolution, through the provision of a solution. It’s a strangeness of English. When I look up the word resolution at dictionary.com, it has 12 options, including “settling of a problem”. There’s a finality about resolution. If you resolve to do something, you’re saying “Yes. This is a done thing. I’m resolving to do it, which means that it may as well be complete already.” I like to think I resolve problems, rather than just solving them. I want my solving to be final and complete. If I tune a query, I don’t want to find that I’m back in there, re-tuning it at some point. Strangely, if I re-solve a problem, that implies that I didn’t resolve it in the first place. I only solved it. Temporarily. We “data-folk” live in a world where the most common answer is “It depends.” Frustratingly, the thing an answer depends on may still be changing in the system in question. That probably means that any solution that is put in place may need reinvestigating at some point later. So do I resolve things? Yes. Am I Chuck Norris, and solve things so well the world would break first? No. Do these two claims happily sit beside each other? No, unfortunately not. But I happily take responsibility for things, and let my clients depend on me to see it through. As far as they are concerned, it is resolved. And so I resolve to keep resolving, right through 2011.

    Read the article

  • What layer to introduce human readable error messages?

    - by MrLane
    One of the things that I have never been happy with on any project I have worked on over the years, and have really not been able to resolve myself, is exactly at what tier in an application human-readable error information should be retrieved for display to a user. A common approach that has worked well has been to return strongly typed/concrete "result objects" from the methods on the public surface of the business tier/API. A method on the interface may be:

        public ClearUserAccountsResult ClearUserAccounts(ClearUserAccountsParam param);

    And the result class implementation:

        public class ClearUserAccountsResult : IResult
        {
            public List<Account> ClearedAccounts { get; private set; }
            public bool Success { get; private set; }   // Implements IResult
            public string Message { get; private set; } // Implements IResult, human readable

            // Constructor implemented here to set the read-only properties...
        }

    This works great when the API needs to be exposed over WCF, as the result object can be serialized. Again, this is only done on the public surface of the API/business tier. The error message can also be looked up from the database, which means it can be changed and localized. However, it has always been suspect to me, this idea of returning human-readable information from the business tier like this, partly because what constitutes the public surface of the API may change over time... and it may be the case that the API will need to be reused by other API components in the future that do not need the human-readable string messages (and looking them up from a database would be an expensive waste). I am thinking a better approach is to keep the business objects free from such result objects, keep them simple, and then retrieve human-readable error strings somewhere closer to the UI layer or only in the UI itself, but I have two problems here:
    1) The UI may be a remote client (Winforms/WPF/Silverlight) or an ASP.NET web application hosted on another server. In these cases the UI will have to fetch the error strings from the server.
    2) Often there are multiple legitimate modes of failure. If the business tier becomes so vague and generic in the way it returns errors, there may not be enough information exposed publicly to tell what the error actually was: i.e. if a method has 3 modes of legitimate failure but returns a boolean to indicate failure, you cannot work out what the appropriate message to display to the user should be.
    I have thought about using failure enums as a substitute; they can indicate a specific error that can be tested for and coded against. This is sometimes useful within the business tier itself as a way of passing the specifics of a failure via method returns rather than just a boolean, but it is not so good for serialization scenarios. Is there a well-worn pattern for this? What do people think? Thanks.
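    (Not part of the original question - a minimal C# sketch of the "failure enum" idea mentioned above: the business tier reports which failure occurred, and a layer close to the UI owns the human-readable, localisable text. All names here are illustrative.)

        public enum ClearUserAccountsFailure
        {
            None = 0,
            AccountLocked,
            InsufficientPermissions,
            NothingToClear
        }

        public class ClearUserAccountsResult
        {
            public bool Success { get; private set; }
            public ClearUserAccountsFailure Failure { get; private set; }

            public ClearUserAccountsResult(bool success, ClearUserAccountsFailure failure)
            {
                Success = success;
                Failure = failure;
            }
        }

        // Mapping to display text lives near the UI (a resource file, satellite assembly or DB lookup),
        // so the business tier never carries user-facing strings.
        public static class FailureMessages
        {
            public static string ToDisplayMessage(ClearUserAccountsFailure failure)
            {
                switch (failure)
                {
                    case ClearUserAccountsFailure.AccountLocked:           return "The account is locked.";
                    case ClearUserAccountsFailure.InsufficientPermissions: return "You do not have permission to clear these accounts.";
                    case ClearUserAccountsFailure.NothingToClear:          return "There were no accounts to clear.";
                    default:                                               return null;
                }
            }
        }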

    Read the article

  • Why would you dual-run an app on Azure and AWS?

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/11/10/why-would-you-dual-run-an-app-on-azure-and-aws.aspx

    I had this question from a viewer of my Pluralsight course, Implementing the Reactive Manifesto with Azure and AWS, and thought I’d publish the response. So why would you dual-run your cloud app by hosting it on Azure and AWS? Sounds like a lot of extra development and management overhead. Well the most compelling reasons are reliability and portability. In 2012 I was working for a client who was making a big investment in the cloud, and at the end of the year we published their first external API for business partners. It was hosted in Azure and used some really nice features to route back into existing on-premise services. We were able to publish a clean, simple API to partners, and hide away the underlying complexity of the internal services while still leveraging them to do all the work. Two days after we went live, we were hit by the Azure SSL certificate expiry outage, and our API was unavailable for the best part of 3 days. Fortunately we had planned a gradual roll-out to partners, so the impact was minimal, but we’d been intending to ramp up quickly, and if the outage had happened a week or two later we would have been in a very bad place. Not least because our app could only run on Azure, we couldn’t package it up for another service without going back and reworking the code. More recently AWS had an issue with a networking device in one of their data centres which caused an outage that took the best part of a day to resolve. In both scenarios the SLAs are worthless, as you’ll get back a small percentage of your cloud expenditure, which is going to be negligible compared to your costs in dealing with the outage. And if your app is built specifically for AWS or Azure then if there’s an extended outage you can’t just deploy it onto a new set of kit from a different supplier. And the chances are pretty good there will be another extended outage, both for Microsoft and for Amazon. But the chances are small that it will happen to both at the same time. So my basic guidance has been: ignore the SLAs, go for better uptime by using two clouds. As soon as you need to scale beyond a single instance, start by scaling out to another cloud. Then scale out to different data centres in both clouds. Then you’ve got dual-cloud, quadruple-datacentre redundancy, so any more scaling you need can be left to the clouds to auto-scale themselves. By running in both clouds, you’ve made your app portable, so in the highly unlikely event that both AWS and Azure go down in multiple regions, you’ll have a deployment package which will let you spin up a new stack on yet another cloud, without having to rework your solution.

    Read the article

  • How to move a website and domain name without experiencing downtime for emails or site?

    - by user4842
    Okay, I have a pretty complex problem, so I'll get right to it. I'm a designer who built a new website for my client. Their old site is hosted at GoDaddy, as well as their email. Problem is, the guy who built the original site decided to put the original domain name and hosting under HIS personal GoDaddy account. Well, that turned out to be a bad move for several reasons. Here's how it's all tied together. The original domain name, www.domainoriginal.com, was actually purchased at Network Solutions. The original web designer pointed the nameservers from Network Solutions to his GoDaddy account, where the email and hosting is setup. The new domain name, www.domainnew.com, was purchased under a new and separate GoDaddy account belonging to the company, and the new website was built under a 3rd party platform (Big Commerce). So, the www.domainnew.com is already pointed to the new website using A records at new GoDaddy account. All is fine there. However, they still need www.domainoriginal.com to point to the NEW website as well. (The old one can simply be deleted, it is NOT important). AND, they want to keep their old email addresses intact and working as well, but under the NEW GoDaddy account. Obviously, I have no DNS control at Network Solutions, and I have no idea what kind of control I have at GoDaddy under the old account because the web designer will not let me see inside his account. But, he and GoDaddy both tell me nothing can be done other than to repoint the nameservers to Network Solutions, and then repoint the A record to my new website, www.domainnew.com, and point the MX Records to GoDaddy. I'm told the downtime would be 24-48 hours if I do this. Ideally, we'd like to do a domain name transfer and get www.domainoriginal.com in the new GoDaddy account created by the company. But, I'm told this could take up to 7 days. Does this mean the site and email will be down for 7 days? And any emails sent during this time, would they be lost forever? If I do this, how long could I expect the site and email to go down? And, will the emails be permanently lost? I've gotten different answers from everybody at GoDaddy so I kind of don't trust them anymore... Any help would be greatly appreciated Thanks, Tyson

    Read the article

  • Goodbye, Spreadsheets and Hello Modern ERP

    - by Christine Randle
    By: Steve Cox, Vice President, Oracle Accelerate for Midsize Companies

    Signs of the resurging economy continue to sprout, with green shoots rising across different sectors and industries. With the economy on the rebound, businesses are increasing their investment in technology to keep up with growth and evolving demands; as proof, Gartner recently increased its worldwide IT spending forecast for 2012 to $3.6 trillion, anticipating a 3 percent increase from 2011 spending. One of the segments most reliant on technology to catapult growth is midsize companies – established businesses leveraging every competitive efficiency and advantage to compete with much larger enterprises. We find that to compete against the big guys, they need to create an internal technology infrastructure to fuel that growth. Goodbye, spreadsheets and hello modern ERP. While many businesses postponed upgrading or replacing financial and HR management systems during the recession, now some have started dusting off RFPs and revisiting technology options. Years ago, midsize organizations used spreadsheet-based systems and processes to manage employees, customers, partners, products and revenue. We’ve found that as companies scale up, they are apt to avoid heavily customizing their existing systems, and are instead more prone to standardize on a modern, enterprise-class ERP system. Modern ERP platforms enable growing companies to immediately address the most pressing challenges – accounting, talent management, customer retention, et al. Midsize companies implement these systems and processes to help them earn more, go public or expand globally. And today, choice is a primary factor when selecting an ERP solution. Businesses have more deployment options now than ever before, depending on their unique structures and needs. Whether the preference is on demand, cloud, hosted or on premise, a modular, scalable deployment is available to meet the need. With modern ERP systems, businesses that once struggled to do more with fewer resources have access to the same quality tools as larger competitors. By adopting top-tier ERP systems tailored to individual business needs, midsize companies can support business operations while creating an enterprise system that seamlessly scales up to fuel future growth. This means that the ERP decision your company makes today will have legs to serve your business for years to come.

    Read the article

  • SQL 2014 does data the way developers want

    - by Rob Farley
    A post I’ve been meaning to write for a while, and it’s good that it fits with this month’s T-SQL Tuesday, hosted by Joey D’Antoni (@jdanton). Ever since I got into databases, I’ve been a fan. I studied Pure Maths at university (as well as Computer Science), and am very comfortable with Set Theory, which undergirds relational database concepts. But I’ve also spent a long time as a developer, and appreciate that databases don’t exactly fit within the stuff I learned in my first year of uni, particularly the “Algorithms and Data Structures” subject, in which we studied concepts like linked lists. Writing in languages like C, we used pointers to quickly move around data, without a database in sight. Of course, if we had a power failure all this data was lost, as it was only persisted in RAM. Perhaps it’s why I’m a fan of database internals, of indexes, latches, execution plans, and so on – the developer in me wants to be reassured that we’re getting to the data as efficiently as possible. Back when SQL Server 2005 was approaching, one of the big stories was around CLR. Many were saying that T-SQL stored procedures would be a thing of the past because we now had CLR, and that it was obviously going to be much faster than using the abstracted T-SQL. Around the same time, we were seeing technologies like Linq-to-SQL produce poor T-SQL equivalents, and developers had had a gutful. They wanted to move away from T-SQL, having lost trust in it. I was never one of those developers, because I’d looked under the covers and knew that despite being abstracted, T-SQL was still a good way of getting to data. It worked for me, appealing to both my Set Theory side and my Developer side. CLR hasn’t exactly become the default option for stored procedures, although there are plenty of situations where it can be useful for getting faster performance. SQL Server 2014 is different though, through Hekaton – its In-Memory OLTP environment. When you create a table using Hekaton (that is, a memory-optimized one), the table you create is the kind of thing you’d’ve made as a developer. It creates code in C leveraging structs and pointers and arrays, which it compiles into fast code. When you insert data into it, it creates a new instance of a struct in memory, and adds it to an array. When the insert is committed, a small write is made to the transaction log to make sure it’s durable, but none of the locking and latching behaviour that typifies transactional systems is needed. Indexes are done using hashes and bw-trees (which avoid locking through the use of pointers) and by handling each update as a delete-and-insert. This is data the way that developers do it when they’re coding for performance – the way I was taught at university before I learned about databases. Being done in C, it compiles to very quick code, and although these tables don’t support every feature that regular SQL tables do, this is still an excellent direction that has been taken. @rob_farley
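    (Not from the original post - a minimal T-SQL sketch of the kind of memory-optimized table described above, using SQL Server 2014 syntax. The database, table and index names and the bucket count are illustrative only.)

        -- The database needs a filegroup for memory-optimized data first:
        ALTER DATABASE SalesDb ADD FILEGROUP SalesDb_mod CONTAINS MEMORY_OPTIMIZED_DATA;
        ALTER DATABASE SalesDb ADD FILE (NAME = 'SalesDb_mod_dir', FILENAME = 'C:\Data\SalesDb_mod_dir')
            TO FILEGROUP SalesDb_mod;

        -- A Hekaton (memory-optimized) table: rows live in memory, the primary key is a hash index,
        -- and durability comes from small writes to the transaction log rather than locks and latches.
        CREATE TABLE dbo.ShoppingCart
        (
            CartId     INT       NOT NULL
                PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
            UserId     INT       NOT NULL,
            CreatedUtc DATETIME2 NOT NULL
        )
        WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);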

    Read the article

  • How to enable and connect to RDP on a Windows Azure Web Role Instance?

    - by Enrique Lima
    We all know there have been some updates to Windows Azure, and one of the biggest I would say is the capability of being able to remote into the “OS level” of the image running a role. And I am not talking about VM Role, I am talking about a Web Role for example. As developers we use Visual Studio, and when we are getting ready to deploy a project, we have the option of enabling this. Here is how:

    1. We publish our project.
    2. On the Deployment dialog, provide all the details for your account and, before clicking OK, click on Configure Remote Desktop connections.
    3. Enable connections and the rest of the configuration. Now, here is where there is an extra set of steps. The first thing to know: the certificate used here is different from the other certs you have in place. I created a new one, then went into certmgr.msc, then to Personal, and selected the cert I had just created. Did a right-click, then All Tasks > Export. Because what is needed is a pfx package, make sure when exporting you select to export the private key.
    4. Click OK on the Remote Desktop Configuration screen. Now, before you click OK on the Deployment dialog, you will need to visit the Azure Portal and perform the following: go to your hosted services; then, with the service available, select the Certificates folder location; then select Add Certificate from the toolbar (more like the Azure Portal Ribbon) and provide the details to upload the recently created pfx file. That will create the Certificate. Click OK on the deployment dialog; this kicks off the deployment process.
    5. Now we need to go to the Windows Azure Portal. Here we will select the Web Role deployed and Configure RDP.
    6. Time to test. Click on the Instance (not the role); this will make the Remote Access Connect button available. A file will also be downloaded to start the process.
    7. You will then be prompted for the credentials you configured.
    8. Validate connectivity …
    9. Open IIS Manager …

    From here on, this is a way to manage and work with your Instance.

    Read the article

  • What are different ways to reduce latency between a server and a web application? [closed]

    - by jjoensuu
    This is a question about a web application that provides SOAP web services. For the sake of this question, this web app is hosted on a server, SERVER B, which is located in California. We have an automated, scheduled process running on a server, SERVER A, located in New York. This scheduled process is supposed to send SOAP messages to SERVER B every so often, but this process typically dies soon after starting. We have now been told by the vendor that the reason the process dies is the latency between SERVER A and SERVER B. The data traffic is routed through many different public networks. There is no dedicated line between SERVER A and SERVER B. As a result I have been asked to look into ways to reduce the latency between SERVER A and SERVER B. So I wanted to ask, what are the different ways to reduce latency in a situation like this? For example, would it help to switch from HTTPS to some other secure protocol? (The thought here is that perhaps some other alternative would require fewer handshakes than HTTPS.) Or would a VPN help? If a VPN would reduce the latency, how would it do that? NOTE: I am not looking for an explicit answer that would work in my specific situation. I am just looking for a simple list of what technologies could be used for this. I will still have to evaluate the technologies and discuss them internally with others, so the list would just be a starting point. Here I am assuming that there are very few ways to reduce latency between two servers that communicate across public networks using HTTPS. Feel free to correct me if this assumption is wrong, and please ask if there is a need for specific information. NOTE 2: A list of technologies is a specific answer to the question I stated in the title. NOTE 3: It's rather dumb to close this question when it is, after all, about me looking for information, and furthermore this information can clearly be useful for others. Anyway, luckily there are other sites where I can ask around. StackExchange seems too attached to their own philosophical principles. Many thanks

    Read the article

  • Curating projects of deceased friends

    - by Ant
    A very good friend of mine, and an avid programmer, recently passed away. He left nearly 40 projects on BitBucket. Most of them are public, but a few of them are marked as private. I've decided to take on curation duties for the projects rather than leave his work to disappear. If you have been in the same situation, what did you do? Did you open-source everything? Continue development? Delete it all? I'm very interested to hear other people's experiences. There are a few reasons why some of the projects are marked as private (private projects on BitBucket are visible only to invited users and the original creator): One of them is an iOS web app that was free in the app store. I've had to remove the app from the store as I'm shutting down his web sites as a favour to his widow. However, I've already made the app public under the GPL v3 (he was a big GPL supporter). One of them contains proprietary code. It can't be open-sourced. Others are very much work-in-progress. I don't know if he intended to make them into hosted, paid services or if he wanted to give the code away under an open-source licence when they were finished. Here's a list of the private projects:

    - Some kind of living cell simulator that uses SBML along with Runge-Kutta and Euler algorithms to do... something. There's a fair amount of code here but I don't know what it does or how far along it is. No docs.
    - An accountancy application; it seems to have a solid DB design behind it but there's little code on top of that.
    - A website whose purpose is to suggest good restaurants. Built on yii. Seems to have a lot of code but I'd need to set up a WAMP stack to see how far along it is.
    - A website intended to host memorials to people who suffered from the same problem he did. Built on Joomla. I'm not sure how much of the code is just Joomla and how much is custom; again, I'd need to get Joomla running to find out.

    I'd just introduced him to Mercurial and BitBucket. All of the private projects are single commits of codebases he either wasn't using version control with or was previously keeping in SVN. I don't have the SVN repositories so I can't see the commit logs.

    Read the article

  • Introduction to open-source and Oracle Java frameworks (Japanese-language article; original title lost to character encoding)

    - by rika.tokumichi
    Text by an author in Oracle's Fusion Middleware organisation (the name and most of this Japanese-language article did not survive the character encoding, so only the recoverable structure is summarised here). The piece references an Oracle Direct Seminar on Java (2009) and then walks through widely used open-source Java frameworks and Oracle's own Java development offerings, with links to each:

    Open-source frameworks:
    - Apache Struts - Java web application framework.
    - Spring Framework - application framework built around Dependency Injection (DI). (See also: Oracle and Spring.)
    - Apache log4j - logging framework, with counterparts for Microsoft .NET (log4net), C++ (log4cxx) and PHP (log4php).
    - JUnit - unit-testing framework for Java, integrated into IDEs such as Eclipse, NetBeans and JDeveloper.

    Oracle offerings:
    - Oracle TopLink - object/relational (O/R) mapping, supporting the Java Persistence API (JPA).
    - Oracle Application Development Framework (ADF) - application framework that can work with Oracle TopLink, Apache Struts, EJB and JavaServer Faces. (Oracle ADF Overview Demo.)
    - Oracle ADF Faces - JavaServer Faces (JSF) based components, including the Ajax-enabled ADF Faces Rich Client (150+ UI components) and the ADF Data Visualization Components. (Oracle ADF Faces Components Hosted Demo; Oracle JDeveloper 11g.)
    - Oracle WebCenter Framework - Enterprise 2.0 framework built on Oracle ADF (part of Oracle WebCenter Suite).

    The article closes with a mention of Oracle WebLogic Server.

    Read the article

  • Importing AMIs from Hyper-v VHDX

    - by jwdaigle
    I have a couple of VHDX files that we use to template locally hosted VMs. I would like to try (some) of these on Amazon, so I need to build an AMI to upload to AWS. I have found http://aws.amazon.com/ec2/vmimport/ , which is very helpful to get started. It appears that AWS does not yet support VHDX, so I found some info that told me to export it out of Hyper-V as a VHD file, and then convert/upload it, which I am in the process of trying out. But the real question is that when I was looking for info, I came across http://stackoverflow.com/questions/14346114/unable-to-rdp-to-ec2-instance . The AWS documentation seems to imply that all I need to do is run the importer and all will be well. True? I don't want to waste all the upload bandwidth and then find out it won't work. Is there something I need to install into the Hyper-V VM before converting/uploading it using the AWS command line tools? EC2-Config? Any help greatly appreciated -
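    (Not from the question - a hedged sketch of the VHDX-to-VHD step mentioned above. It assumes the Hyper-V PowerShell module on Windows Server 2012 or later, where the Convert-VHD cmdlet is available; the file names are illustrative only, and the output format is inferred from the .vhd extension.)

        Convert-VHD -Path 'D:\Templates\web-template.vhdx' -DestinationPath 'D:\Export\web-template.vhd'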

    Read the article

  • Teamcity build agent gives 504 gateway timeout

    - by Anthony
    I have a new teamcity build agent machine, which when started up tries to connect to the build server and fails. It never shows up in the connected, disconnected or unauthorised agents tabs of the build server web interface. The logs on the build agent show that it fails to connect with a 504 gateway timeout. This is from teamcity-agent.log:

        [2012-09-04 15:34:59,776] INFO - buildServer.AGENT.registration - Registering on server http://10.0.10.16, AgentDetails{Name='my-local', AgentId=null, BuildId=null, AgentOwnAddress='10.0.1.14', AlternativeAddresses=[10.0.10.32], Port=8080, Version='21424', PluginsVersion='21424-md5-somechecksum', AvailableRunners=[ABunchOfPlugins], AvailableVcs=[SomeRunners], AuthorizationToken='sometoken'}
        [2012-09-04 15:35:53,606] WARN - buildServer.AGENT.registration - Call http://10.0.10.16/RPC2 buildServer.registerAgent3: org.apache.xmlrpc.XmlRpcClientException: Server returned incorrect status code: 504 Gateway Time-out
        [2012-09-04 15:35:53,606] WARN - buildServer.AGENT.registration - Connection to TeamCity server is probably lost. Will be trying to restore it. Take a look at logs/teamcity-agent.log for details (unless you're using custom logging).

    (I have edited some identifying data out of this log excerpt.) But I can reach the build server. In fact, tracert shows that it is very nearby:

        Tracing route to TEAMCITYSERVER [10.0.10.16] over a maximum of 30 hops:
          1    <1 ms    <1 ms    <1 ms  10.0.2.1
          2    <1 ms    <1 ms    <1 ms  TEAMCITYSERVER [10.0.10.16]
        Trace complete.

    I can see a teamcity login page if I hit http://10.0.10.16 in the browser. The teamcity service is logging in as the same (local administrator) account as I used to log in and test the network. The build agent is a Windows 2008 server VM hosted on Ubuntu 12.04 under Oracle VirtualBox. I have disabled firewalls on both the Windows and Ubuntu machines. Other VMs with similar configuration can connect fine and do not report this error. What can possibly be preventing this connection?

    Read the article

  • Mounting an Azure blob container in a Linux VM Role

    - by djechelon
    I previously asked a question about this topic but now I prefer to rewrite it from scratch because I was very confused back then. I currently have a Linux XS VM Role in Azure. I basically want to create a self-managed and evolved hosting service using VMs rather than Azure's more-expensive Web Roles. I also want to take advantage of load balancing (between VM Roles) and geo-replication (of Storage Roles), making sure that the "web files" of customers are located in a defined and manageable place. One way I found to "mount" a drive in a Linux VM is described here and involves mounting a VHD onto the virtual machine. From what I could learn, the VHD is reliably stored in a storage role, and is exclusively locked by the VM that uses it. Once the VM Role has its drive I can format the partition to any size I want. I don't want that!! I would like each hosted site to have its own blob directory, and then each replicated/load-balanced VM Role to mount that blob directory read-write, NFS-style, to read HTML and script files. The database is obviously courtesy of Microsoft :) My question is: is it possible to actually mount blob storage onto a directory in the Linux FS? Is it possible in Windows Server 2008?

    Read the article

  • Timer_EntityBody, Timer_ConnectionIdle and Connection Closed Unexpectly

    - by ihsany
    We have a Windows application that connects to a web service (an XML web service hosted on a Windows 2008 Server, IIS 7.5, no antivirus) and fetches some data for the client. But sometimes (around 5%-10% of the requests), it gives an error when trying to connect to the web service. Here is the client application error log:

        Exception: System.Net.WebException: The underlying connection was closed: The connection was closed unexpectedly.
           at System.Web.Services.Protocols.WebClientAsyncResult.WaitForResponse()
           at System.Web.Services.Protocols.WebClientProtocol.EndSend(IAsyncResult asyncResult, Object& internalAsyncState, Stream& responseStream)
           at System.Web.Services.Protocols.SoapHttpClientProtocol.EndInvoke(IAsyncResult asyncResult)
           at APPClient.APPFPService.WEBService.EndAddMoney(IAsyncResult asyncResult)
           at APPClient.BLL.ServiceAgent.AddMoneyCallback(IAsyncResult ar)

    On the other hand, on the web server I checked the HTTP error logs and I see a long file like this:

        2014-06-05 14:02:04 65.82.178.73 53798 SERVER.IP.ADDRESS 80 - - - - - Timer_ConnectionIdle -
        2014-06-05 14:07:24 76.109.81.223 58985 SERVER.IP.ADDRESS 80 - - - - - Timer_ConnectionIdle -
        2014-06-05 14:07:39 76.109.81.223 2803 SERVER.IP.ADDRESS 80 - - - - - Timer_ConnectionIdle -
        2014-06-05 14:08:59 76.109.81.223 52656 SERVER.IP.ADDRESS 80 - - - - - Timer_ConnectionIdle -
        2014-06-05 14:09:05 65.82.178.73 53904 SERVER.IP.ADDRESS 80 HTTP/1.1 POST /webservice/webservice.asmx - 2 Timer_EntityBody SYPService
        2014-06-05 14:10:55 50.186.180.191 50648 SERVER.IP.ADDRESS 80 - - - - - Timer_ConnectionIdle -

    Here is a similar situation but it did not help me. UPDATE: When I checked the IIS logs, I see some issues like these:

        cs-method  cs-uri-stem                  sc-status  sc-win32-status  time-taken  cs-version
        POST       /webservice/webservice.asmx  400        64               46          HTTP/1.1
        POST       /webservice/webservice.asmx  400        64               134675      HTTP/1.1
        POST       /webservice/webservice.asmx  400        64               37549       HTTP/1.1
        POST       /webservice/webservice.asmx  400        64               109         HTTP/1.1
        POST       /webservice/webservice.asmx  400        64               31          HTTP/1.1
        POST       /webservice/webservice.asmx  400        64               0           HTTP/1.1
        POST       /webservice/webservice.asmx  400        64               15          HTTP/1.1

    sc-win32-status 64: The specified network name is no longer available. sc-status 400: Bad request. Also, some requests take around 130 seconds, but some take less than 1 second. This is a Windows application which connects to a web service to process some data. There is no query that takes around 130 seconds on the database.

    Read the article

  • Updating Windows DNS records from a remote windows DNS server

    - by Luckyboy
    Does anyone know if it is possible for a Windows 2003 DNS server to update the records for a domain so that it contains all the records of a domain on a remotely based DNS server? I'm almost certain that doesn't quite explain the problem, so I shall illustrate with an example: We have two offices, based about 100 miles apart. One deals with IT (Intranet development etc.) while the other is a call centre that uses the Intranet systems. Currently each office has its own DNS server, with the IT office's and the call centre's DNS servers both containing entries for the intranet sites. The difference is that the IT DNS server records point to the various servers that host the Intranet sites (e.g. intranetsite1 - 192.168.1.10, intranetsite2 - 192.168.1.11) while all of the entries in the call centre's DNS point to the IT office's DNS server (intranetsite1 - [it office ip address], intranetsite2 - [it office ip address]). Is there any way that the call centre's DNS server could automatically add all DNS records hosted by the IT office's DNS, translating the IP addresses to the IP address of the IT office?

    Read the article

  • Postfix unable to create lock file, permission denied

    - by John Bowlinger
    I thought I had my Postfix configuration all set up on my Amazon Ubuntu server, but I guess not. I'm trying to set up an admin email account for 3 virtually hosted Apache websites. Here's my Postfix main.cf file:

        myhostname = ip-XX-XXX-XX-XXX.us-west-2.compute.internal
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination = ip-XX-XXX-XX-XXX.us-west-2.compute.internal, localhost.us-west-2.compute.internal, , localhost
        relayhost =
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        virtual_mailbox_domains = example1.com, example2.com, example3.com
        virtual_mailbox_base = /var/mail/vhosts
        virtual_mailbox_maps = hash:/etc/postfix/vmailbox
        virtual_minimum_uid = 100
        virtual_uid_maps = static:115
        virtual_gid_maps = static:115
        virtual_alias_maps = hash:/etc/postfix/virtual

    Here's my vmailbox file:

        [email protected] example1.com/admin
        [email protected] example2.com/admin
        [email protected] example3.com/admin
        @example1.com example1.com/catchall
        @example2.com example2.com/catchall
        @example3.com example3.com/catchall

    And finally my virtual file:

        [email protected] postmaster
        [email protected] postmaster
        [email protected] postmaster

    When I try to send an email through netcat to one of my domains, I get:

        unable to create lock file /var/mail/vhosts/example1.com/admin.lock: Permission denied

    This is despite the fact that I set the example1.com directory's group to postfix, and my virtual_uid_maps and virtual_gid_maps are both set to the Postfix group id of 115.

    Read the article

  • Installing gitosis and closed port?

    - by Nicolas GUILLAUME
    I'm trying to install gitosis on a server (hosted by OVH and running Ubuntu server 11.04). I've done it a few times and never had any problems. But this time something very weird happens when I simply try to clone gitosis:

        [root@ovks-1:~/]#git clone git://eagain.net/gitosis.git
        Cloning into gitosis...
        eagain.net[0: 208.78.102.120]: errno=Connection refused
        fatal: unable to connect a socket (Connection refused)
        zsh: exit 128   git clone git://eagain.net/gitosis.git

    Based on my searches it looks like port 9418 is closed. But I don't understand: a server by definition shouldn't have any closed port, and I can't find a way to see if they are. So how can I check if a port is open, and how can I open it if it's closed? Thank you for your help. Requested by WesleyDavid: the iptables -L result:

        [root@odeoos-vks-1:~/]#iptables -L
        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination

        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination

    I have no idea what it means... Thanks :)

    Read the article

  • Router for creating site to site VPN to server provider using Cisco ASA 5540

    - by fondie
    We have dedicated servers hosted for us by a third party; we connect to these over a VPN. My server provider uses Cisco ASA 5540s as VPN devices. Currently we're using software clients on individual machines to connect to this VPN, either the Cisco VPN Client or Shrew Soft VPN Connect. However, I'm looking to purchase a new load-balancing router for our office and thought this could be an opportunity to get VPN client duties taken over by hardware. We could then create a permanent VPN tunnel that could be used by anyone on the network with no software client necessary. Sadly I'm not the most knowledgeable on this kind of stuff, so: 1) Is this a realizable goal? Next I need to know what kind of hardware I will need. I'm not looking to spend lots of money on this (~$500), so it's doubtful I can afford any Cisco kit. Therefore, this is the most promising candidate I've seen (as far as my limited knowledge goes): Draytek Vigor 2955 - http://www.draytek.co.uk/products/vigor2955.html 2) Would this be compatible with the Cisco kit my server provider uses? 3) If not, are there any alternatives I should consider? Many thanks in advance.

    Read the article
