Search Results

Search found 19690 results on 788 pages for 'result partitioning'.

Page 408 of 788

  • Mutating Programming Language?

    - by MattiasK
    For fun I was thinking about how one could build a programming language that differs from OOP, and came up with this concept. I don't have a strong foundation in computer science, so it might be commonplace without me knowing it (more likely it's just a stupid idea :) I apologize in advance for this somewhat rambling question :) Anyway, here goes:

    In normal OOP, methods and classes vary only with their parameters, meaning that if two different classes/methods call the same method they get the same output. My, perhaps crazy, idea is that the calling method and class could be an "invisible" part of its signature, and the response could vary depending on who calls the method. Say that we have a Window object with a Break() method; anyone (who has access) could call this method on Window with the same result. Now say that we have two different objects, Hammer and SledgeHammer. If Break() needs to produce different results based on these, we'd pass them as parameters: Break(IBluntObject bluntObject). With a mutating programming language (MPL), the objects operating on the method would be visible to the Break() method without being explicitly defined, and it could adapt itself based on them. So if SledgeHammer calls Window.Break() it would generate vastly different results than if Hammer did so.

    If OOP classes are black boxes, then MPL classes are black boxes that know who's (trying to) push their buttons and can adapt accordingly. You could also have different permission sets on methods depending on who's calling them, rather than having absolute permissions like public and private.

    Does this have any advantage over OOP? Or perhaps I should say, would it add anything to it, since you should be able to simply add this aspect to methods (just give access to a CallingMethod and CallingClass variable in context)? I'm not sure; it might be too hard to wrap one's head around, but it would be kind of interesting to have classes that adapt themselves to whoever uses them. Still, it's an interesting concept. What do you think, is it viable?
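    (The "invisible caller" idea can be emulated today by inspecting the call stack. Below is a minimal, purely illustrative Python sketch, not part of the original question, showing a Break() whose result varies with the type of the caller:

        import inspect

        class Window:
            def break_(self):
                # Peek one frame up the stack to find the object that called us --
                # the "invisible parameter" the question describes.
                caller = inspect.stack()[1].frame.f_locals.get("self")
                if type(caller).__name__ == "SledgeHammer":
                    return "window shatters"
                if type(caller).__name__ == "Hammer":
                    return "window cracks"
                return "window is intact"

        class Hammer:
            def swing(self, window):
                return window.break_()

        class SledgeHammer:
            def swing(self, window):
                return window.break_()

        print(Hammer().swing(Window()))        # window cracks
        print(SledgeHammer().swing(Window()))  # window shatters

    The sketch only illustrates the dispatch idea; it says nothing about whether a language should build this in.)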

    Read the article

  • Samba shares won't automount on boot from fstab

    - by kelvin
    This question seems to have been asked a few times, but it doesn't seem anybody has really solved it yet, at least not for my specific circumstance. I have fstab set up to mount a CIFS share, but on boot the share never gets mounted. However, if I run mount -a after boot, it mounts everything just fine. Here's what my fstab looks like (ignore the commented lines... I just did a few for testing purposes right now):

        //192.168.1.97/media /mnt/samba cifs credentials=/home/xbmc/.smbcredentials,rw,file_mode=0777,dir_mode=0777,sec=ntlm,auto 0 0
        #//192.168.1.97/media/TV\040Shows /home/xbmc/TV\040Shows cifs credentials=/home/xbmc/.smbcredentials,rw,file_mode=0777,dir_mode=0777,sec=ntlm,auto
        //192.168.1.97/media/Movies /home/xbmc/Movies cifs credentials=/home/xbmc/.smbcredentials,rw,file_mode=0777,dir_mode=0777,sec=ntlm,auto 0 0
        //192.168.1.97/media/Music /home/xbmc/Music cifs credentials=/home/xbmc/.smbcredentials,rw,file_mode=0777,dir_mode=0777,sec=ntlm,auto 0 0
        #//192.168.1.97/media/3\040-\040My\040Pictures /home/xbmc/Pictures cifs credentials=/home/xbmc/.smbcredentials,rw,file_mode=0777,dir_mode=0777,sec=ntlm,auto
        #//192.168.1.97/media/XBMC /home/xbmc/Admin cifs credentials=/home/xbmc/.smbcredentials,rw,file_mode=0777,dir_mode=0777,sec=ntlm,auto

    I have seen a few things on the internet where it was believed the share isn't available yet (i.e. wifi not connected yet, etc.) when the mount is attempted. 1) Is there any way to confirm that's the problem, and 2) if so, is there a solution? Is there some way to put a delay in fstab? Or how might I write a script to run mount -a a certain amount of time after boot? I found the option _netdev from a little research and included that in fstab, but still the same result. Thanks for your help.
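    (One common workaround, sketched here purely as an illustration and not taken from the original post, is to retry the CIFS mounts shortly after boot with a root cron job, e.g. via sudo crontab -e:

        # retry the CIFS mounts from fstab about a minute after boot,
        # once the network has had time to come up
        @reboot sleep 60 && mount -a -t cifs

    Whether 60 seconds is enough depends on how long the network actually takes to come up on that box.)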

    Read the article

  • SQL SERVER – Get Schema Name from Object ID using OBJECT_SCHEMA_NAME

    - by pinaldave
    Sometimes a simple solution has an even simpler alternative, but we often do not practice it because we do not see value in it or find it useful. Well, today's blog post is also about something I have not seen practiced much in code. We are so comfortable with the alternative usage that we do not feel like switching how we query the data. I was going over forums and I noticed that in one place a user had used the following code to get the schema name from an object ID:

        USE AdventureWorks2012
        GO
        SELECT s.name AS SchemaName, t.name AS TableName, s.schema_id, t.OBJECT_ID
        FROM sys.tables t
        INNER JOIN sys.schemas s ON s.schema_id = t.schema_id
        WHERE t.name = OBJECT_NAME(46623209)
        GO

    Before I continue, let me say I do not see anything wrong with this script. It is just fine and one of the ways to get the schema name from an object ID. However, I have been using the function OBJECT_SCHEMA_NAME to get the schema name. If I had to write the same code from the beginning, I would have written it as follows:

        SELECT OBJECT_SCHEMA_NAME(46623209) AS SchemaName, t.name AS TableName, t.schema_id, t.OBJECT_ID
        FROM sys.tables t
        WHERE t.name = OBJECT_NAME(46623209)
        GO

    Both of the above queries give you exactly the same result. If you remove the WHERE condition, they will give you information about all the tables in the database. Now the question is which one is better? Honestly, it is not about one being better than the other. Use the one which you prefer to use. I prefer the second one as it requires less typing. Let me ask you the same question: which method do you use to get the schema name, and why? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL System Table, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SharePoint Saturday DC 2010 Slides, Demo Scripts, and Pictures

    - by Brian Jackett
    Wow! This past weekend I attended SharePoint Saturday Washington DC (SPSDC), which was quite an event to say the least.  For those unfamiliar, SharePoint Saturday is a community driven event where various speakers gather to present at a FREE conference on all topics related to SharePoint.  This made my fifth SharePoint Saturday attended and fourth I've spoken at.  SPSDC was a bit different than most SharePoint Saturdays, mostly due to the scale of it.  We had almost 950 attendees, over 80 speakers presenting close to 90 sessions, and dozens of sponsors.  A big thanks goes out to the organizers of this event.  They put in a lot of hard work and time to pull this event off and should be very proud of the end result.  For SPSDC I presented "The Power of PowerShell + SharePoint 2007".  I want to thank all of the attendees of my session for coming and asking some great questions.  Below you can find the slides and demo scripts for this session.  I also took some photos throughout the day (not as many as usual since there was so much going on), so check them out.  If you have any follow up questions feel free to drop me a line in the comments or the contact link at the top of the site.

    Slides and Scripts: Click here for the demo scripts and slides posted on my SkyDrive. VERY IMPORTANT NOTE: One thing I forgot to mention in my presentation: in order to run code against the SharePoint API you need to load the Microsoft.SharePoint.dll assembly first.  Run the command below on the PowerShell console to do that:

        [void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")

    Photos: Facebook album -or- my album on the Windows Live site (higher res shots).         -Frog Out

    Read the article

  • An alternative to a video codec for storing motion changes [on hold]

    - by Andrew Simpson
    I have a 3-dimensional byte array. The 3-D array represents a JPEG image; each channel/array represents part of the RGB spectrum. I am not interested in retaining black pixels. A black pixel is represented by this typical arrangement:

        myarray[0,0,0] = 0;
        myarray[0,0,1] = 0;
        myarray[0,0,2] = 0;

    So I have flattened this 3-D array out to a 1-D array by doing this:

        byte[] AFlatArray = new byte[width * height * 3];

    and then assigning values according to the coordinate. But like I said, I do not want black pixels, so this array has to contain only colored pixels together with their x,y coordinates. The result I want is to re-represent the image from the 1-dimensional byte array that contains only non-black pixels. How do I do that? It looks like I have to store black pixels as well because of the x,y coordinate system. I have tried writing to a binary file, but the size of that file is greater than the JPEG file, as the JPEG file is compressed. I am using C#.
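    (For illustration only, not from the original question: one way to keep only non-black pixels is to store each one as an explicit (x, y, r, g, b) record, so the coordinates travel with the pixel and black pixels cost nothing. A minimal C# sketch, assuming the array is indexed [x, y, channel]:

        // Writes only non-black pixels as (x, y, r, g, b) records.
        static void WriteSparse(byte[,,] pixels, string path)
        {
            int width = pixels.GetLength(0), height = pixels.GetLength(1);
            using (var writer = new System.IO.BinaryWriter(System.IO.File.Create(path)))
            {
                for (int x = 0; x < width; x++)
                    for (int y = 0; y < height; y++)
                    {
                        byte r = pixels[x, y, 0], g = pixels[x, y, 1], b = pixels[x, y, 2];
                        if (r != 0 || g != 0 || b != 0)
                        {
                            writer.Write((ushort)x);   // 2 bytes per coordinate
                            writer.Write((ushort)y);
                            writer.Write(r); writer.Write(g); writer.Write(b);
                        }
                    }
            }
        }

    Whether this ends up smaller than the JPEG depends entirely on how sparse the frame is; wrapping the stream in a System.IO.Compression.GZipStream would usually shrink it further.)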

    Read the article

  • Webserver not giving the correct response on CURL and other httprequest methods [migrated]

    - by Maxim
    I am trying to make a REST request to an external webserver by using this code:

        <?php
        $user = 'USER';
        $pass = 'PASS';
        $data = "MYDATA";
        $ch = curl_init('URL');
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
        curl_setopt($ch, CURLOPT_HTTPHEADER, array(
            'Content-Type: application/json',
            'Content-Length: ' . strlen($data))
        );
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
        curl_setopt($ch, CURLOPT_VERBOSE, true);
        if(!($res = curl_exec($ch))) {
            echo('[cURL Failure] ' . curl_error($ch));
        }
        curl_close($ch);
        echo($res);

    Now, this is a cURL request; however, I have tried different methods to test my result and they all give me a 403 Forbidden error response from this webserver. I do get a 200 response when I run the same thing on any other webserver (localhost, webserver2, ...). Therefore I think there is something wrong with my webserver, and it might be disallowing/caching the POST parameters that I provide, because sometimes it returns a 200 response but most of the time it returns the 403. This is the response I get:

        HTTP/1.1 403 Forbidden
        Accept-Ranges: bytes
        Content-Type: application/json; charset=UTF-8
        Date: Sat, 26 Oct 2013 13:56:37 GMT
        Server: Restlet-Framework/2.1.3
        Vary: Accept-Charset, Accept-Encoding, Accept-Language, Accept
        Content-Length: 77
        Connection: keep-alive

        {"error":"ForbiddenOperationException","errorMessage":"Invalid credentials."}

    It says "Invalid credentials"; however, I provide the correct credentials and I can confirm them because it is working on other servers. Since this is a crucial part of my script that I use for clients to register, I assume that there is something wrong with the POST parameters. I am running cPanel and have already uninstalled the following: varnish, apachebooster. I also recompiled PHP and enabled cURL and its dependencies, but nothing seems to resolve my problem. If more information is required then don't hesitate to ask me in the comments; I will respond very quickly as I really need this. Any help is appreciated. Kind regards, Maxim

    Read the article

  • SQL query. An unusual join. DB implemented in sqlite-3

    - by user02814
    This is essentially a question about constructing an SQL query. The DB is implemented with sqlite3. I am a relatively new user of SQL. I have two tables and want to join them in an unusual way. The following is an example to explain the problem.

        Table 1 (t1):
        id    year   name
        -------------------------
        297   2010   Charles
        298   2011   David
        300   2010   Peter
        301   2011   Richard

        Table 2 (t2):
        id    year   food
        ---------------------------
        296   2009   Bananas
        296   2011   Bananas
        297   2009   Melon
        297   2010   Coffee
        297   2012   Cheese
        298   2007   Sugar
        298   2008   Cereal
        298   2012   Chocolate
        299   2000   Peas
        300   2007   Barley
        300   2011   Beans
        300   2012   Chickpeas
        301   2010   Watermelon

    I want to join the tables on id and year. The catch is that the id must match exactly, but if there is no exact match in Table 2 for the year in Table 1, then I want to choose the next (lower) available year. A selection of the kind that I want to produce would give the following result:

        id    year   matchyr   name      food
        -------------------------------------------------
        297   2010   2010      Charles   Coffee
        298   2011   2008      David     Cereal
        300   2010   2007      Peter     Barley
        301   2011   2010      Richard   Watermelon

    To summarise: id=297 had an exact match for year=2010 given in Table 1, so the corresponding line for id=297, year=2010 is chosen from Table 2. id=298, year=2011 did not have a matching year in Table 2, so the next available year (less than 2011) is chosen. As you can see, I would also like to know what that matched year (whether exact or not) actually was. I would very much appreciate (1) an indication (yes/no answer) of whether this is possible in SQL alone, or whether I need to look outside SQL, and (2) a solution, if that is not too onerous.
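    (Not part of the original question, but as a sketch of one possible shape for such a query in SQLite, using correlated subqueries to pick the largest t2.year that does not exceed t1.year for the same id:

        SELECT t1.id,
               t1.year,
               (SELECT MAX(t2.year) FROM t2
                 WHERE t2.id = t1.id AND t2.year <= t1.year) AS matchyr,
               t1.name,
               (SELECT t2.food FROM t2
                 WHERE t2.id = t1.id AND t2.year <= t1.year
                 ORDER BY t2.year DESC LIMIT 1) AS food
        FROM t1;

    Rows with no t2 year at or below t1.year would come back with NULL in matchyr and food.)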

    Read the article

  • icacls in windows 7 does not give full permission to write files in root drive

    - by Menuta
    icacls in Windows 7 does not give full permission to write files in the root drive. We have a very old application based on Omnis7 that needs to create and read/write files on drive C: when running as a restricted user. In Windows XP, giving this permission is quite trivial using cacls:

        cacls C:\ /G Everyone:(C)

    The equivalent icacls in Windows 7 will not work:

        icacls C:\ /Grant Everyone:(M)

    I have also tried the following:

        icacls C:\ /Grant Everyone:(F)
        icacls C:\ /Grant Domain\user:(F)

    Trying to create a file with a restricted user gives this:

        C:\>copy nul text.txt
        Access is denied.
                0 file(s) copied.

    After applying the icacls permissions above, the result changes to this:

        C:\>copy nul text.txt
        A required privilege is not held by the client.
                0 file(s) copied.

    Is this an issue with the way I am applying the permissions? Or is Windows 7 being extremely strict?

    Read the article

  • Google doesn't seem to update the description or title of my homepage

    - by Dayson
    Before we launched our website, we had set up a "coming soon" page and Google picked up the title and description from its contents, so the description in the search results said, "Coming soon! Visit epicwhale.org for updates." It's been a few weeks since we launched our website. We've even created a sitemap and submitted it to Google. In the Google Webmaster panel, the pages have been crawled and all the pages are appearing as expected on Google, except the homepage, which is still not updated! The title and description of the homepage in Google search results still say "coming soon". The website I am referring to is textmewidget.com, and below are the images of the search result. Google: http://i.imgur.com/vAkJg.png I checked on Bing too, but it appears to be fine there. Bing: http://i.imgur.com/Q8O6L.png All other pages seem to be indexed fine on Google. I don't even have any crawl errors in my reports. So what seems to be the problem? I've already waited for 2 weeks. Thanks in advance!

    Read the article

  • External USB hard-drive changing drive letter

    - by Sydius
    I have a Seagate FreeAgent Go external USB hard drive that was mounted but mysteriously decided to reconnect itself:

        Sep 30 15:07:06 feinman kernel: [243901.551604] usb 1-1.2: USB disconnect, device number 3
        Sep 30 15:07:06 feinman kernel: [243901.553828] sd 6:0:0:0: [sdb] Synchronizing SCSI cache
        Sep 30 15:07:06 feinman kernel: [243901.553893] sd 6:0:0:0: [sdb] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
        Sep 30 15:07:10 feinman kernel: [243905.336557] usb 1-1.2: new high-speed USB device number 4 using ehci_hcd
        Sep 30 15:07:10 feinman kernel: [243905.431219] scsi7 : usb-storage 1-1.2:1.0
        Sep 30 15:07:11 feinman kernel: [243906.427207] scsi 7:0:0:0: Direct-Access Seagate FreeAgent Go 0148 PQ: 0 ANSI: 4
        Sep 30 15:07:11 feinman kernel: [243906.428303] sd 7:0:0:0: Attached scsi generic sg1 type 0
        Sep 30 15:07:11 feinman kernel: [243906.430317] sd 7:0:0:0: [sdc] 625142447 512-byte logical blocks: (320 GB/298 GiB)
        Sep 30 15:07:11 feinman kernel: [243906.430860] sd 7:0:0:0: [sdc] Write Protect is off
        Sep 30 15:07:11 feinman kernel: [243906.430865] sd 7:0:0:0: [sdc] Mode Sense: 1c 00 00 00
        Sep 30 15:07:11 feinman kernel: [243906.431386] sd 7:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
        Sep 30 15:07:11 feinman kernel: [243906.493674] sdc: sdc1
        Sep 30 15:07:11 feinman kernel: [243906.496109] sd 7:0:0:0: [sdc] Attached SCSI disk

    It changed from sdb to sdc, causing a number of problems for me. What can I do to further track down the cause? I thought it might be a problem with it sleeping, but when I cat /sys/class/scsi_disk/6\:0\:0\:0/allow_restart, I see that it's already 1.

    Read the article

  • Teredo - how to connect to host behind NAT?

    - by Signum
    All I want to achieve is to establish a connection to my simple server (written in C# using the TcpListener class, if it makes any difference) on my computer, which is behind NAT. It has an IPv6 address (a public IP, starting with 2001:0) on the Teredo interface. However, I cannot even ping it from outside my network; for instance, when I try to ping this address from this website http://mebsd.com/ipv6-ping-and-traceroute, the result is 100% packet loss. As I understood from reading about Teredo, there should be no need for any port forwarding? So where could the problem be?

    Read the article

  • SharePoint 2010 PowerShell Script to Find All SPShellAdmins with Database Name

    - by Brian Jackett
    Problem
    Yesterday on Twitter my friend @cacallahan asked for some help on how she could get all SharePoint 2010 SPShellAdmin users and the associated database name.  I spent a few minutes and wrote up a script that gets this information and decided I'd post it here for others to enjoy.

    Background
    The Get-SPShellAdmin commandlet returns a listing of SPShellAdmins for the given database Id you pass in, or the farm configuration database by default.  For those unfamiliar, SPShellAdmin access is necessary for non-admin users to run PowerShell commands against a SharePoint 2010 farm (content and configuration databases specifically).  Click here to read an excellent guest post article my friend John Ferringer (twitter) wrote on the Hey Scripting Guy! blog regarding granting SPShellAdmin access.

    Solution
    Below is the script I wrote (formatted for space and to include comments) to provide the information needed. Click here to download the script.

        # declare a hashtable to store results
        $results = @{}

        # fetch databases (only configuration and content DBs are needed)
        $databasesToQuery = Get-SPDatabase | Where {$_.Type -eq 'Configuration Database' -or $_.Type -eq 'Content Database'}

        # for each database get spshelladmins and add db name and username to result
        $databasesToQuery | ForEach-Object {$dbName = $_.Name; Get-SPShellAdmin -database $_.id | ForEach-Object {$results.Add($dbName, $_.username)}}

        # sort results by db name and pipe to table with auto sizing of col width
        $results.GetEnumerator() | Sort-Object -Property Name | ft -AutoSize

    Conclusion
    In this post I provided a script that outputs all of the SPShellAdmin users and the associated database names in a SharePoint 2010 farm.  Funny enough, it actually took me longer to boot up my dev VM and PowerShell (~3 mins) than it did to write the first working draft of the script (~2 mins).  Feel free to use this script and modify as needed, just be sure to give credit back to the original author.  Let me know if you have any questions or comments.  Enjoy!         -Frog Out

    Links
    PowerShell Hashtables: http://technet.microsoft.com/en-us/library/ee692803.aspx
    SPShellAdmin Access Explained: http://blogs.technet.com/b/heyscriptingguy/archive/2010/07/06/hey-scripting-guy-tell-me-about-permissions-for-using-windows-powershell-2-0-cmdlets-with-sharepoint-2010.aspx

    Read the article

  • How to Fix this specific Google "Fetch as Googlebot" error appearing on my Webmaster Tools?

    - by UXdesigner
    Good day, I'm currently trying to find out why I have lost all of my website's rank in Google. I don't even appear in Google results for the domain, but other sites that link to me do appear in the results. I think it's all about leaving my site alone for two months and finding out I had 20k comment-spam entries, which I completely deleted and fixed with filters and by adding a new Disqus comment service. The thing is, I added my site to Google Webmaster Tools and I'm finding out several awful things. For example, when I click on Fetch as Googlebot, I receive the error message below in response to my request, and I don't even know what the real problem is or how to fix it. I simply don't get it. This is what appears:

        Date: Wednesday, July 20, 2011 9:43:35 AM PDT
        Googlebot Type: Web
        Download Time (in milliseconds): 55
        HTTP/1.1 403 Forbidden
        Date: Wed, 20 Jul 2011 16:43:36 GMT
        Server: Apache
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 248
        Keep-Alive: timeout=2, max=100
        Connection: Keep-Alive
        Content-Type: text/html; charset=iso-8859-1

        403 Forbidden
        Forbidden
        You don't have permission to access / on this server.
        Additionally, a 403 Forbidden error was encountered while trying to use an ErrorDocument to handle the request.

    Do you guys know anything about this problem? I need to have Google crawl my site again. I used to have a really nice Google result for the past three years. Now, there's nothing. thanks,

    Read the article

  • Object oriented EDI handling in PHP

    - by Robert van der Linde
    I'm currently starting a new sub project where I will:

        - Retrieve the order information from our mainframe
        - Save the order information to our web-app's database
        - Send the order as EDI (either D01B or D93A)
        - Receive the order response, despatch advice and invoice messages
        - Do all kinds of fun things with the resulting datasets

    However, I am struggling with my initial class designs. The order information will be retrieved from the mainframe, which will result in an "AOrder" class. This isn't a problem, but I am not sure how to mold this local object into an EDI string. Should I create EDIOrder/EDIOrderResponse/etc. classes with matching decorators (EDIOrderD01BDecorator, EDIOrderD93ADecorator)? Do I need builder objects, or can I do:

        // $myOrder is an instance of AOrder
        $myOrder->toEDIOrder();
        $decorator = new EDIOrderD01BDecorator($myOrder);
        $edi = $decorator->getEDIString();

    And it'll have to work the other way around as well. Is the following code a good way of handling this problem, or should I go about this differently?

        $ediString = $myEDIMessageBroker->fetch();
        $ediOrderResponse = EDIOrderResponse::fromString($ediString);

    I'm just not so sure about how I should go about designing the classes and the interactions between them. Thanks for reading and helping.
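    (Purely as an illustrative sketch, not from the original question: another shape for this is to keep AOrder format-agnostic and put each EDI dialect behind a small formatter interface rather than decorating the order itself. The names below are made up for the example:

        // One class per EDI dialect; AOrder itself never knows about EDI.
        interface EdiOrderFormatter
        {
            public function toEdiString(AOrder $order);
            public function fromEdiString($ediText);
        }

        class D01BOrderFormatter implements EdiOrderFormatter
        {
            public function toEdiString(AOrder $order) { /* build D01B segments */ }
            public function fromEdiString($ediText) { /* parse D01B back into an AOrder */ }
        }

        // Usage sketch:
        // $edi   = $formatter->toEdiString($myOrder);
        // $order = $formatter->fromEdiString($ediString);

    A matching D93A class would implement the same interface, and the message broker would only ever deal in strings.)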

    Read the article

  • Service Testing made easy with SO-Aware Test Workbench

    - by cibrax
    I'm happy to announce today a new addition to our SO-Aware service repository toolset: SO-Aware Test Workbench, a WPF desktop application for doing functional and load testing against existing WCF services. This tool is completely integrated with the SO-Aware service repository, which makes configuring new load and functional tests for WCF SOAP and REST services a breeze. From now on, the service repository can play a very important role in an organization by facilitating collaboration between developers and testers. Developers can create and register new services in the repository with all the related artifacts, like configuration. Testers, on the other hand, can just pick one of the existing services in the repository and create functional or load tests from there, with no need to deal with specific details of the service implementation, location or configuration settings. Developers and testers can later use the results of those tests to modify the services or adjust different settings on the tests or service configuration. Gustavo Machado, one of the developers behind this project, has written an excellent post describing all the functionality you can find today in the tool. You can also see the tool in action in this Endpoint Tv episode with Jesus and Ron Jacobs.

    Read the article

  • Best practices for using namespaces in C++.

    - by Dima
    I read Uncle Bob's Clean Code a few months ago, and it has had a profound impact on the way I write code. Even if it seemed like he was repeating things that every programmer should know, putting them all together and putting them into practice does result in much cleaner code. In particular, I found breaking up large functions into many tiny functions, and breaking up large classes into many tiny classes, to be incredibly useful. Now for the question. The book's examples are all in Java, while I have been working in C++ for the past several years. How would the ideas in Clean Code extend to the use of namespaces, which do not exist in Java? (Yes, I know about Java packages, but it is not really the same.) Does it make sense to apply the idea of creating many tiny entities, each with a clearly defined responsibility, to namespaces? Should a small group of related classes always be wrapped in a namespace? Is this the way to manage the complexity of having lots of tiny classes, or would the cost of managing lots of namespaces be prohibitive?
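    (As an illustration of the pattern being asked about, with made-up names: a small group of closely related classes wrapped in its own nested namespace, so callers address them as billing::invoicing::Invoice or pull one in locally with a using-declaration.

        namespace billing {
        namespace invoicing {

        class Invoice { /* ... */ };
        class InvoiceFormatter { /* ... */ };
        class InvoiceRepository { /* ... */ };

        }  // namespace invoicing
        }  // namespace billing

        // using billing::invoicing::Invoice;  // narrow, local import at the call site

    Whether every such group deserves its own namespace is exactly what the question is weighing.)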

    Read the article

  • SQL Maintenance Cleanup Task Working but Not Deleting

    - by Alex
    I have a Maintenance Plan that is supposed to go through the BACKUP folder and remove all .bak files older than 5 days. When I run the job, it gives me a success message, but older .bak files are still present. I've tried the steps in the following question: SQL Maintenance Cleanup Task 'Success' But not deleting files. The result is column IsDamaged = 0. I've also verified against the following question and this is not my issue: Maintenance Cleanup Task(s) running 'successfully' but not deleting back up files. I've also tried deleting the Job and Maintenance Plan and recreating them, but to no avail. Any ideas?

    Read the article

  • You Don't Want to Meet Orgad Kimchi in a Dark Alley

    - by rickramsey
    Do you remember what those bad guys in the old Charles Bronson films looked like? They looked like Orgad Kimchi, that's what they looked like. When I met him at Oracle OpenWorld 2012, I realized I didn't want to meet him in the wrong alleyway of Budapest after dark. Neither do old versions of Oracle Solaris, which Orgad bends to his will with as much ease as he probably bends stray tourists to his will in Budapest, Kandahar, or Dagestan. How Orgad Made Oracle Database Migrate from Oracle Solaris 8 to Oracle Solaris 11: In this article, which we liked so much we reprinted it from his blog (please don't tell him!), Orgad explains how he head-butted an Oracle Database into submission. The database thought it was safe running on Oracle Solaris 8, but Orgad dragged its whimpering carcass into Oracle Solaris 11. How'd he do that? Well, if you had met Orgad in person, you wouldn't ask that question. Because you'd know he could have simply stared at it, and the database would have migrated on its own. But Orgad didn't do that. Instead, he stuffed an Oracle Solaris 8 Physical-to-Virtual (P2V) Archiver Tool into his leather trench coat, the one with the special pockets sewn in by the East German Secret Police for several Uzis and their ammo, and walked into his data center in a way that reminded the survivors of this clip from Matrix Reloaded. The end result? The Oracle Database 10.2 that was running on Oracle Solaris 8 is now running inside a Solaris 10 branded zone on Oracle Solaris 11. With no complaints. Don't make Orgad angry. Read his article. - Rick

    Read the article

  • Import SSIS Project in Denali CTP1

    For years Analysis Services has had the ability to take an existing database from a server and reverse engineer it into a BIDS project.  This is extremely useful when all you have is the running instance of the database and the project that created it has long since disappeared.  Reverse engineering has never been a feature of SSIS until now. Let me walk you through the simple steps. The first step is that you obviously have to have a project deployed to an SSIS Catalog.  I will do a video on this soon, but in case you can't wait, my good buddy Jamie Thomson has written it up here. As you can see, I have a project called, imaginatively, "Denali1" with one package, "Package.dtsx". The next thing we need to do is fire up BIDS and choose the right project type (Integration Services Import Project). Now we just follow the wizard.  We make sure we specify on which server to find the Catalog and in which folder to look for the project. Next the settings are validated and we are greeted with the familiar review screen before the creation of our new project from the deployed project happens. Hit Import and away we go. The result is just what we wanted.

    Read the article

  • How to fill in the network line in the ubuntu interfaces config file?

    - by matnagel
    I have to configure an Ubuntu Hardy server network interface. The service hoster told me that this is the network data for the machine:

        IP Range:  111.111.200.74 to 111.111.200.78
        Netmask:   255.255.255.248
        Broadcast: 111.111.200.79
        Gateway:   111.111.200.73
        Subnet:    111.111.200.72/29

    I am only using the first IP address. I will update the /etc/hosts file with 111.111.200.74, but I am still unsure how the /etc/network/interfaces file should look. This is my plan:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 111.111.200.74
            netmask 255.255.255.248
            network 111.111.200.???
            broadcast 111.111.200.79
            gateway 111.111.200.73

    As you can see, I don't know how to build the network line. How would I calculate the data for the network line, and what is the result? (I changed the first 2 octets of the subnet; they are not "111.111" in the real setup.)
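    (For reference, and not part of the original question: the network value is the address ANDed with the netmask. 255.255.255.248 is a /29, so the low 3 bits of the last octet belong to hosts; 74 in binary is 01001010, AND 11111000 gives 01001000 = 72, which matches the "Subnet: 111.111.200.72/29" line the hoster supplied.)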

    Read the article

  • Doubling the DPI with a shader?

    - by Mathias Lykkegaard Lorenzen
    I'm developing a game where the map is generated with Perlin noise, but on the CPU. I am generating some Perlin noise into a small texture, and then I stretch it out to the whole screen to simulate a map. The reason for generating the noise on the CPU is that I want it to look the same on all devices. Now, here's the end result. Please ignore the bullets and the explosion in the picture. What matters is the background (the black/gray pixels) and the ground (the brown-ish pixels). They are rendered to the same texture through Perlin noise. However, this doesn't look very pretty. So I was wondering if it would be possible to double the amount of pixels using a shader, rounding edges at the same time? In other words, improve the DPI. I'm using SharpDX with DirectX 11, through its toolkit feature. But any help that'll lead me in the right direction (for instance through HLSL) would be a great help. Thanks in advance.
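    (Not an answer from the original post, just a sketch of the kind of thing a pixel shader can do here: sample the small noise texture with bilinear filtering, but reshape the interpolation weights so the stretched texels keep crisp, gently rounded transitions instead of a blurry smear. The texture size and register numbers below are assumptions.

        Texture2D    NoiseTexture  : register(t0);
        SamplerState LinearSampler : register(s0);   // bilinear sampler

        float4 PS(float4 pos : SV_POSITION, float2 uv : TEXCOORD0) : SV_Target
        {
            float2 texSize = float2(256.0, 256.0);   // assumed noise texture size
            float2 texel   = uv * texSize - 0.5;     // position in texel space
            float2 f       = frac(texel);
            f = f * f * (3.0 - 2.0 * f);             // smoothstep-shaped weights
            float2 uv2 = (floor(texel) + 0.5 + f) / texSize;
            return NoiseTexture.Sample(LinearSampler, uv2);
        }

    A real resolution increase would also mean rendering the noise to a larger intermediate target; this only changes how the existing texels are filtered when stretched.)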

    Read the article

  • Mysql Error 2002 (HY000) on Snow Leopard

    - by Ole Media
    My boss updated my computer to Snow Leopard. After the update we had a setback and deleted a few files/folders; since then it has been one nightmare after another. I'm finally getting things back, but I'm still having problems with MySQL. This is what I did:

        - Deleted ALL of the MySQL files/folders
        - Downloaded and installed the packages from mysql-5.1.45-osx10.6-x86_64.dmg
        - Installed the Startup Item and the preferences panel

    After the above, I tried to start MySQL from the preferences panel without luck, and running the following command from Terminal:

        /usr/local/mysql/bin/mysql

    gives the following result:

        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)

    I looked at some other posts for possible solutions, but what they describe does not exactly fit my problem, so I cannot find a solution. I'm new to all this and your help will be much appreciated.

    Read the article

  • How can I move mysites to a new location

    - by Bob
    I recently restored my content and was instructed to create mysites in a different location than was originally used. Now I have several users' mysites in /personal. The new desired location is /mysites. From what I found in the documentation, I should back them up and restore them to the new location. Here's what I've done:

        Backup an individual site collection for a user's mysite:
        stsadm -o backup -url "https://myUrl/personal/john_smith" -filename johnsmith.bkup

        Restore an individual site collection for a user's mysite:
        stsadm -o restore -url "https://myUrl/mysites/john_smith" -filename johnsmith.bkup -overwrite

    The result of this, and the problem, is that when I enumerate sites I end up with this:

        <Site Url="https://myUrl/mysites" Owner="domainname\john.smith" ContentDatabase="WSS_Content_MySites" StorageUsedMB="1.6" StorageWarningMB="90000" StorageMaxMB="100000" />

    It leaves off the username part of the URL, and if I restore more than one they want to overwrite each other.

    Read the article

  • Can't start SSH on Ubuntu 12.10 AWS EC2

    - by Conor H
    So I've just started playing around with Ubuntu on Amazon EC2. I've just issued the following command to restart SSH, but it has now "killed" SSH:

        sudo /etc/init.d/ssh restart

    I can't seem to SSH to this instance anymore; PuTTY just gives me "connection refused". NOTE: in this case I just restarted SSH to see the result; I didn't change any settings. This was to confirm that the restart command was the problem and not any configs I made. What is the correct way to restart SSH? P.S. That usually works on other Ubuntu boxes. Thanks. EDIT: It is also worth noting that when I ran that command I was taken straight back to a prompt. I didn't get any output on the console.

    Read the article

  • Write DAX queries in Report Builder #ssrs #dax #ssas #tabular

    - by Marco Russo (SQLBI)
    If you use Report Builder with Reporting Services, you can use DAX queries even though the editor for the Analysis Services provider does not support DAX syntax. In fact, the DMX editor that you can use in the Visual Studio editor for Reporting Services (see a previous post on that) is not available in Report Builder. However, as Sagar Salvi commented in this Microsoft Connect entry, you can use DAX query text in the query of a Dataset by using the OLE DB provider instead of the Analysis Services one. I think it's a good idea to show the steps required. First, create a Data Source using the OLE DB connection type, and provide in the connection string the provider (Provider), the server name (Data Source) and the database name (Initial Catalog), such as:

        Provider=MSOLAP;Data Source=SERVERNAME\\TABULAR;Initial Catalog=AdventureWorks Tabular Model SQL 2012

    Then, create a Dataset using the data source previously defined, select the Text query type, and write the DAX code in the Query pane. You can also use the Query Designer window, which doesn't provide any particular help in writing the DAX query, but at least can show a preview of the result of the query execution. I hope DAX will get better editors in the future… in the meantime, remember you can use DAX Studio to write and test your DAX queries, and DAX Formatter to improve their readability! If you want to learn the DAX query language, I suggest watching my video Data Analysis Expressions as a Query Language on Project Botticelli!
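    (To make the Text-query step concrete, here is a purely illustrative DAX query of the kind you could paste into the Query pane; the table and column names are assumed from the Adventure Works Tabular sample model, not taken from the original post:

        EVALUATE
        SUMMARIZE (
            'Internet Sales',
            'Date'[Calendar Year],
            "Sales Amount", SUM ( 'Internet Sales'[Sales Amount] )
        )

    The returned columns should then show up as dataset fields like the result of any other query.)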

    Read the article
