Search Results

Search found 20990 results on 840 pages for 'live services'.

  • Can Django be used for web services?

    - by alex
    My friend said, "Pylons is so much better for web services." My other friend said, "You can modify Django to do exactly what you like." In Django, what needs to be modified (urls.py? model classes? settings?) in order to build "web services" with APIs, REST, versioning, and so on?
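    Plain Django can already serve JSON from ordinary views; packages such as django-rest-framework add conveniences like serializers, routers and versioning helpers on top. As a rough, hypothetical sketch only (recent Django URL syntax; the view name, URL and data below are invented, not taken from the question):

        # views.py -- a hand-rolled JSON endpoint, no extra framework required
        import json
        from django.http import HttpResponse

        def book_list(request):
            # In a real project this data would come from a model queryset.
            data = [{"id": 1, "title": "Example"}]
            return HttpResponse(json.dumps(data), content_type="application/json")

        # urls.py -- put the API version in the URL, e.g. /api/v1/books/
        from django.urls import path
        # from myapp.views import book_list   # hypothetical app name

        urlpatterns = [
            path("api/v1/books/", book_list, name="book-list"),
        ]

    So the usual answer is: urls.py and your views (plus serialization of your models); settings.py only changes if you bring in a third-party API app.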

    Read the article

  • Web services & PHP

    - by pareja
    Hi... I need help because I don't know how to build web services in PHP. I need to create a web service with PHP, and I'm looking for some free references or tutorials. Thanks for your help.
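    To give the question some shape, a bare-bones JSON endpoint in plain PHP (no framework) looks roughly like the sketch below; the file name and data are made up, and a real service would add routing, validation and authentication:

        <?php
        // api.php -- answer GET requests with a JSON list (PHP 5.4+ for http_response_code)
        if ($_SERVER['REQUEST_METHOD'] !== 'GET') {
            http_response_code(405);   // only GET is supported in this sketch
            exit;
        }

        header('Content-Type: application/json');

        $books = array(
            array('id' => 1, 'title' => 'Example'),
        );

        echo json_encode($books);

    The free PHP manual chapters on json_encode and the SoapServer extension, plus any general REST tutorial, are reasonable starting points.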

    Read the article

  • What are the software and hardware requirements for building REST web services in Java?

    - by user1846545
    I want to build REST web services in Java. Can somebody tell me what the software and hardware requirements are for that? For example: one computer, which database, and which server, plus anything else, because I want to use these web services globally, post JSON in the request, and get a JSON response back for an Android app. Thanks, any answer would be a great help.
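    Hardware-wise almost any machine can host a small REST API; the software side is typically just a JDK, a servlet container or lightweight server, a JAX-RS implementation such as Jersey, and whichever database fits the data. As an illustrative sketch only (class and path names are invented here, not from the question):

        // A minimal JAX-RS resource that accepts and returns JSON.
        import javax.ws.rs.Consumes;
        import javax.ws.rs.GET;
        import javax.ws.rs.POST;
        import javax.ws.rs.Path;
        import javax.ws.rs.Produces;
        import javax.ws.rs.core.MediaType;

        @Path("/devices")
        public class DeviceResource {

            @GET
            @Produces(MediaType.APPLICATION_JSON)
            public String list() {
                // A real service would serialize objects loaded from the database.
                return "[{\"id\": 1, \"name\": \"android-client\"}]";
            }

            @POST
            @Consumes(MediaType.APPLICATION_JSON)
            @Produces(MediaType.APPLICATION_JSON)
            public String create(String body) {
                // Echo the posted JSON back; replace with real persistence logic.
                return body;
            }
        }

    Deployed behind HTTPS on any host the Android app can reach, this is already "global"; scale the hardware only when traffic demands it.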

    Read the article

  • Why should I prefer OSGi Services over exported packages?

    - by Jens
    Hi, I am trying to get my head around OSGi Services. The main question I keep asking myself is: what's the benefit of using services instead of working with bundles and their exported packages? As far as I can tell, the concept of late binding has something to do with it. Bundle dependencies are wired together at bundle start, so they are pretty fixed, I guess. But with services it seems to be almost the same: a bundle starts and registers services or binds to services. Of course services can come and go whenever they want, and you have to keep track of these changes. But the core idea doesn't seem that different to me. Another aspect seems to be that services are more flexible: there could be many implementations for one specific interface. On the other hand, there can be a lot of different implementations for a specific exported package too. In another text I read that the disadvantage of using exported packages is that they make the application more fragile than services. The author wrote that if you remove one bundle from the dependency graph, other dependencies would no longer be met, possibly causing a domino effect on the whole graph. But couldn't the same happen if a service went offline? To me it looks like service dependencies are no better than bundle dependencies. So far I could not find a blog post, book or presentation that clearly describes why services are better than just exposing functionality by exporting and importing packages. To sum my questions up: what are the key benefits of using OSGi Services that make them superior to exporting and importing packages?
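    The difference is easiest to see in code: a consumer of an exported package ends up referencing a concrete implementation class at build time, while a service consumer only ever sees the interface and discovers an implementation at runtime, so implementations can be swapped or restarted without touching the consumer. A rough sketch against the raw OSGi API (type names invented for illustration; in practice Declarative Services or Blueprint handle the lookup and the dynamics for you):

        import org.osgi.framework.BundleActivator;
        import org.osgi.framework.BundleContext;
        import org.osgi.framework.ServiceReference;

        // Shared API bundle: only this interface's package is exported.
        interface Greeter {
            String greet(String name);
        }

        // Provider bundle: the implementation class stays private to the bundle.
        class GreeterImpl implements Greeter {
            public String greet(String name) { return "Hello, " + name; }
        }

        class ProviderActivator implements BundleActivator {
            public void start(BundleContext ctx) {
                ctx.registerService(Greeter.class.getName(), new GreeterImpl(), null);
            }
            public void stop(BundleContext ctx) {
                // The registration is withdrawn automatically when the bundle stops.
            }
        }

        // Consumer bundle: imports only the Greeter package, never GreeterImpl.
        class ConsumerActivator implements BundleActivator {
            public void start(BundleContext ctx) {
                ServiceReference ref = ctx.getServiceReference(Greeter.class.getName());
                if (ref != null) {                 // the service may legitimately be absent
                    Greeter greeter = (Greeter) ctx.getService(ref);
                    System.out.println(greeter.greet("OSGi"));
                    ctx.ungetService(ref);
                }
            }
            public void stop(BundleContext ctx) { }
        }

    A missing package import fails the consumer bundle at resolve time; a missing service is just a null reference the consumer can react to, which is where the extra resilience comes from.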

    Read the article

  • Move Files from a Failing PC with an Ubuntu Live CD

    - by Trevor Bekolay
    You’ve loaded the Ubuntu Live CD to salvage files from a failing system, but where do you store the recovered files? We’ll show you how to store them on external drives, drives on the same PC, a Windows home network, and other locations. We’ve shown you how to recover data like a forensics expert, but you can’t store recovered files back on your failed hard drive! There are lots of ways to transfer the files you access from an Ubuntu Live CD to a place that a stable Windows machine can access them. We’ll go through several methods, starting each section from the Ubuntu desktop – if you don’t yet have an Ubuntu Live CD, follow our guide to creating a bootable USB flash drive, and then our instructions for booting into Ubuntu. If your BIOS doesn’t let you boot using a USB flash drive, don’t worry, we’ve got you covered! Use a Healthy Hard Drive If your computer has more than one hard drive, or your hard drive is healthy and you’re in Ubuntu for non-recovery reasons, then accessing your hard drive is easy as pie, even if the hard drive is formatted for Windows. To access a hard drive, it must first be mounted. To mount a healthy hard drive, you just have to select it from the Places menu at the top-left of the screen. You will have to identify your hard drive by its size. Clicking on the appropriate hard drive mounts it, and opens it in a file browser. You can now move files to this hard drive by drag-and-drop or copy-and-paste, both of which are done the same way they’re done in Windows. Once a hard drive, or other external storage device, is mounted, it will show up in the /media directory. To see a list of currently mounted storage devices, navigate to /media by clicking on File System in a File Browser window, and then double-clicking on the media folder. Right now, our media folder contains links to the hard drive, which Ubuntu has assigned a terribly uninformative label, and the PLoP Boot Manager CD that is currently in the CD-ROM drive. Connect a USB Hard Drive or Flash Drive An external USB hard drive gives you the advantage of portability, and is still large enough to store an entire hard disk dump, if need be. Flash drives are also very quick and easy to connect, though they are limited in how much they can store. When you plug a USB hard drive or flash drive in, Ubuntu should automatically detect it and mount it. It may even open it in a File Browser automatically. Since it’s been mounted, you will also see it show up on the desktop, and in the /media folder. Once it’s been mounted, you can access it and store files on it like you would any other folder in Ubuntu. If, for whatever reason, it doesn’t mount automatically, click on Places in the top-left of your screen and select your USB device. If it does not show up in the Places list, then you may need to format your USB drive. To properly remove the USB drive when you’re done moving files, right click on the desktop icon or the folder in /media and select Safely Remove Drive. If you’re not given that option, then Eject or Unmount will effectively do the same thing. Connect to a Windows PC on your Local Network If you have another PC or a laptop connected through the same router (wired or wireless) then you can transfer files over the network relatively quickly. To do this, we will share one or more folders from the machine booted up with the Ubuntu Live CD over the network, letting our Windows PC grab the files contained in that folder. As an example, we’re going to share a folder on the desktop called ToShare. 
Right-click on the folder you want to share, and click Sharing Options. A Folder Sharing window will pop up. Check the box labeled Share this folder. A window will pop up about the sharing service. Click the Install service button. Some files will be downloaded, and then installed. When they’re done installing, you’ll be appropriately notified. You will be prompted to restart your session. Don’t worry, this won’t actually log you out, so go ahead and press the Restart session button. The Folder Sharing window returns, with Share this folder now checked. Edit the Share name if you’d like, and add checkmarks in the two checkboxes below the text fields. Click Create Share. Nautilus will ask your permission to add some permissions to the folder you want to share. Allow it to Add the permissions automatically. The folder is now shared, as evidenced by the new arrows above the folder’s icon. At this point, you are done with the Ubuntu machine. Head to your Windows PC, and open up Windows Explorer. Click on Network in the list on the left, and you should see a machine called UBUNTU in the right pane. Note: This example is shown in Windows 7; the same steps should work for Windows XP and Vista, but we have not tested them. Double-click on UBUNTU, and you will see the folder you shared earlier! As well as any other folders you’ve shared from Ubuntu. Double click on the folder you want to access, and from there, you can move the files from the machine booted with Ubuntu to your Windows PC. Upload to an Online Service There are many services online that will allow you to upload files, either temporarily or permanently. As long as you aren’t transferring an entire hard drive, these services should allow you to transfer your important files from the Ubuntu environment to any other machine with Internet access. We recommend compressing the files that you want to move, both to save a little bit of bandwidth, and to save time clicking on files, as uploading a single file will be much less work than a ton of little files. To compress one or more files or folders, select them, and then right-click on one of the members of the group. Click Compress…. Give the compressed file a suitable name, and then select a compression format. We’re using .zip because we can open it anywhere, and the compression rate is acceptable. Click Create and the compressed file will show up in the location selected in the Compress window. Dropbox If you have a Dropbox account, then you can easily upload files from the Ubuntu environment to Dropbox. There is no explicit limit on the size of file that can be uploaded to Dropbox, though a free account begins with a total limit of 2 GB of files in total. Access your account through Firefox, which can be opened by clicking on the Firefox logo to the right of the System menu at the top of the screen. Once into your account, press the Upload button on top of the main file list. Because Flash is not installed in the Live CD environment, you will have to switch to the basic uploader. Click Browse…find your compressed file, and then click Upload file. Depending on the size of the file, this could take some time. However, once the file has been uploaded, it should show up on any computer connected through Dropbox in a matter of minutes. Google Docs Google Docs allows the upload of any type of file – making it an ideal place to upload files that we want to access from another computer. While your total allocation of space varies (mine is around 7.5 GB), there is a per-file maximum of 1 GB. 
    Log into Google Docs, and click on the Upload button at the top left of the page. Click Select files to upload and select your compressed file. For safety’s sake, uncheck the checkbox concerning converting files to Google Docs format, and then click Start upload. Go Online – Through FTP If you have access to an FTP server – perhaps through your web hosting company, or you’ve set up an FTP server on a different machine – you can easily access the FTP server in Ubuntu and transfer files. Just make sure you don’t go over your quota if you have one. You will need to know the address of the FTP server, as well as the login information. Click on Places > Connect to Server… Choose the FTP (with login) Service type, and fill in your information. Adding a bookmark is optional, but recommended. You will be asked for your password. You can choose to remember it until you logout, or indefinitely. You can now browse your FTP server just like any other folder. Drop files into the FTP server and you can retrieve them from any computer with an Internet connection and an FTP client. Conclusion While at first the Ubuntu Live CD environment may seem claustrophobic, it has a wealth of options for connecting to peripheral devices, local computers, and machines on the Internet – and this article has only scratched the surface. Whatever the storage medium, Ubuntu’s got an interface for it!
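    For readers who prefer a terminal to the GUI steps above, the "healthy hard drive" route can also be done with mount and cp; the device name and mount point below are examples, so check yours first with sudo fdisk -l:

        # Create a mount point and mount the Windows (NTFS) partition.
        sudo mkdir -p /mnt/windows
        sudo mount /dev/sda1 /mnt/windows      # replace sda1 with your data partition

        # Copy the recovered files across, then unmount before shutting down.
        cp -rv ~/recovered-files /mnt/windows/
        sudo umount /mnt/windows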

    Read the article

  • How to create a bootable system with a squashfs root

    - by cldfzn
    My goal is to be able to take a customized root file system loaded with the software I want. So far I've created a squashed filesystem using debootstrap and chroot to install the software I want on the system. The problem I am now running into: whenever I boot into the system, the user accounts that were set up in the chroot do not work. On the first boot everything works; on the second boot I can't log in. That is baffling to me. Anyone know a reason or a place to start looking? Update: To get a working system with a squashfs filesystem:
        sudo apt-get install live-boot live-boot-initramfs-tools extlinux
        sudo update-initramfs -u
    Create a squashfs file from a bootstrapped or running Ubuntu filesystem with whatever packages you want available. https://help.ubuntu.com/community/LiveCDCustomizationFromScratch provides good instructions for creating a debootstrapped system to build on. Format the target drive with ext2/3/4 and enable the bootable flag. Create the folder layout on the target drive and install extlinux:
        mkdir -p ${TARGET}/boot/extlinux ${TARGET}/live
        extlinux -i ${TARGET}/boot/extlinux
        dd if=/usr/lib/syslinux/mbr.bin of=/dev/sdX    # X is the drive letter
        cp /boot/vmlinuz-$(uname -r) ${TARGET}/boot/vmlinuz
        cp /boot/initrd.img-$(uname -r) ${TARGET}/boot/initrd
        cp filesystem.squashfs ${TARGET}/live
    Create ${TARGET}/boot/extlinux/extlinux.conf with the following contents:
        DEFAULT Live
        LABEL Live
          KERNEL /boot/vmlinuz
          APPEND initrd=/boot/initrd boot=live toram=filesystem.squashfs
        TIMEOUT 10
        PROMPT 0
    Now you should be able to boot from the target drive into your squashed system.
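    The update assumes filesystem.squashfs already exists; a hedged sketch of producing one from a debootstrap chroot directory (the ./chroot path is an example, and pseudo-filesystems such as /proc, /sys and /dev should be unmounted from it first):

        # Squash the prepared root filesystem into a single compressed image.
        sudo mksquashfs chroot filesystem.squashfs -comp xz -noappend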

    Read the article

  • Diagnose PC Hardware Problems with an Ubuntu Live CD

    - by Trevor Bekolay
    So your PC randomly shuts down or gives you the blue screen of death, but you can’t figure out what’s wrong. The problem could be bad memory or hardware related, and thankfully the Ubuntu Live CD has some tools to help you figure it out. Test your RAM with memtest86+ RAM problems are difficult to diagnose—they can range from annoying program crashes, or crippling reboot loops. Even if you’re not having problems, when you install new RAM it’s a good idea to thoroughly test it. The Ubuntu Live CD includes a tool called Memtest86+ that will do just that—test your computer’s RAM! Unlike many of the Live CD tools that we’ve looked at so far, Memtest86+ has to be run outside of a graphical Ubuntu session. Fortunately, it only takes a few keystrokes. Note: If you used UNetbootin to create an Ubuntu flash drive, then memtest86+ will not be available. We recommend using the Universal USB Installer from Pendrivelinux instead (persistence is possible with Universal USB Installer, but not mandatory). Boot up your computer with a Ubuntu Live CD or USB drive. You will be greeted with this screen: Use the down arrow key to select the Test memory option and hit Enter. Memtest86+ will immediately start testing your RAM. If you suspect that a certain part of memory is the problem, you can select certain portions of memory by pressing “c” and changing that option. You can also select specific tests to run. However, the default settings of Memtest86+ will exhaustively test your memory, so we recommend leaving the settings alone. Memtest86+ will run a variety of tests that can take some time to complete, so start it running before you go to bed to give it adequate time. Test your CPU with cpuburn Random shutdowns – especially when doing computationally intensive tasks – can be a sign of a faulty CPU, power supply, or cooling system. A utility called cpuburn can help you determine if one of these pieces of hardware is the problem. Note: cpuburn is designed to stress test your computer – it will run it fast and cause the CPU to heat up, which may exacerbate small problems that otherwise would be minor. It is a powerful diagnostic tool, but should be used with caution. Boot up your computer with a Ubuntu Live CD or USB drive, and choose to run Ubuntu from the CD or USB drive. When the desktop environment loads up, open the Synaptic Package Manager by clicking on the System menu in the top-left of the screen, then selecting Administration, and then Synaptic Package Manager. Cpuburn is in the universe repository. To enable the universe repository, click on Settings in the menu at the top, and then Repositories. Add a checkmark in the box labeled “Community-maintained Open Source software (universe)”. Click close. In the main Synaptic window, click the Reload button. After the package list has reloaded and the search index has been rebuilt, enter “cpuburn” in the Quick search text box. Click the checkbox in the left column, and select Mark for Installation. Click the Apply button near the top of the window. As cpuburn installs, it will caution you about the possible dangers of its use. Assuming you wish to take the risk (and if your computer is randomly restarting constantly, it’s probably worth it), open a terminal window by clicking on the Applications menu in the top-left of the screen and then selection Applications > Terminal. Cpuburn includes a number of tools to test different types of CPUs. 
    If your CPU is more than six years old, see the full list; for modern AMD CPUs, use the terminal command burnK7, and for modern Intel processors, use the terminal command burnP6. Our processor is an Intel, so we ran burnP6. Once it started up, it immediately pushed the CPU up to 99.7% total usage, according to the Linux utility “top”. If your computer is having a CPU, power supply, or cooling problem, then your computer is likely to shut down within ten or fifteen minutes. Because of the strain this program puts on your computer, we don’t recommend leaving it running overnight – if there’s a problem, it should crop up relatively quickly. Cpuburn’s tools, including burnP6, have no interface; once they start running, they will keep driving your CPU until you stop them. To stop a program like burnP6, press Ctrl+C in the terminal window that is running the program. Conclusion The Ubuntu Live CD provides two great testing tools to diagnose a tricky computer problem, or to stress test a new computer. While they are advanced tools that should be used with caution, they’re extremely useful and easy enough that anyone can use them.
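    If you would rather skip the Synaptic steps above, the install and test can also be done from a terminal, assuming the universe repository is enabled as described and that cpuburn is still packaged for your release:

        # Refresh the package list and install the cpuburn tools.
        sudo apt-get update
        sudo apt-get install cpuburn

        # Stress a modern Intel CPU (burnK7 is the AMD equivalent).
        burnP6 &
        top            # watch load and temperatures while the test runs

        # When finished (quit top with q), stop the background burnP6 job.
        kill %1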

    Read the article

  • Oracle Unveils Industry’s Broadest Cloud Strategy

    - by kellsey.ruppel
    Oracle Unveils Industry’s Broadest Cloud Strategy Adds Social Cloud and Showcases early customers Redwood Shores, Calif. – June 6, 2012 “Almost seven years of relentless engineering and innovation plus key strategic acquisitions. An investment of billions. We are now announcing the most comprehensive Cloud on the planet Earth,” said Oracle CEO, Larry Ellison. “Most cloud vendors only have niche assets. They don’t have platforms to extend. Oracle is the only vendor that offers a complete suite of modern, socially-enabled applications, all based on a standards-based platform.” News Facts In a major strategy update today, Larry Ellison announced the industry’s broadest and most advanced Cloud strategy and introduced Oracle Cloud Social Services, a broad Enterprise Social Platform offering. Oracle Cloud delivers a broad set of industry-standards based, integrated services that provide customers with subscription-based access to Oracle Platform Services, Application Services, and Social Services, all completely managed, hosted and supported by Oracle. Offering a wide range of business applications and platform services, the Oracle Cloud is the only cloud to enable customers to avoid the data and business process fragmentation that occurs when using multiple, siloed public clouds. Oracle Cloud is powered by leading enterprise-grade infrastructure, including Oracle Exadata and Oracle Exalogic, providing customers and partners with a high-performance, reliable, and secure infrastructure for running critical business applications. Oracle Cloud enables easy self-service for both business users and developers. Business users can order, configure, extend, and monitor their applications. Developers and administrators can easily develop, deploy, monitor and manage their applications. As part of the event, Oracle also showcased several early Oracle Cloud customers and partners including system integrators and independent software vendors. Oracle Cloud Platform Services Built on a common, complete, standards-based and enterprise-grade set of infrastructure components, Oracle Cloud Platform Services enable customers to speed time to market and lower costs by quickly building, deploying and managing bespoke applications. Oracle Cloud Platform Services will include: Database Services to manage data and build database applications with the Oracle Database. Java Services to develop, deploy and manage Java applications with Oracle WebLogic. Developer Services to allow application developers to collaboratively build applications. Web Services to build Web applications rapidly using PHP, Ruby, and Python. Mobile Services to allow developers to build cross-platform native and HTML5 mobile applications for leading smartphones and tablets. Documents Services to allow project teams to collaborate and share documents through online workspaces and portals. Sites Services to allow business users to develop and maintain visually engaging .com sites Analytics Services to allow business users to quickly build and share analytic dashboards and reports through the Cloud. Oracle Cloud Application Services Oracle Cloud Application Services provides customers access to the industry’s broadest range of enterprise applications available in the cloud today, with built-in business intelligence, social and mobile capabilities. 
Easy to setup, configure, extend, use and administer, Oracle Cloud Application Services will include: ERP Services: A complete set of Financial Accounting, Project Management, Procurement, Sourcing, and Governance, Risk & Compliance solutions. HCM Services: A complete Human Capital Management solution including Global HR, Workforce Lifecycle Management, Compensation, Benefits, Payroll and other solutions. Talent Management Services: A complete Talent Management solution including Recruiting, Sourcing, Performance Management, and Learning. Sales and Marketing Services: A complete Sales and Marketing solution including Sales Planning, Territory Management, Leads & Opportunity Management, and Forecasting. Customer Experience Services: A complete Customer Service solution including Web Self-Service, Contact Centers, Knowledge Management, Chat, and e-mail Management. Oracle Cloud Social Services Oracle Cloud Social Services provides the most broad and complete enterprise social platform available in the cloud today.  With Oracle Cloud Social Services, enterprises can engage with their customers on a range of social media properties in a comprehensive and meaningful fashion including social marketing, commerce, service and listening. The platform also provides enterprises with a rich social networking solution for their employees to collaborate effectively inside the enterprise. Oracle’s integrated social platform will include: Oracle Social Network to enable secure enterprise collaboration and purposeful social networking for business. Oracle Social Data Services to aggregate data from social networks and enterprise data sources to enrich business applications. Oracle Social Marketing and Engagement Services to enable marketers to centrally create, publish, moderate, manage, measure and report on their social marketing campaigns. Oracle Social Intelligence Services to enable marketers to analyze social media interactions and to enable customer service and sales teams to engage with customers and prospects effectively. Supporting Resources Oracle Cloud – learn more cloud.oracle.com – sign up now Webcast – watch the replay About Oracle Oracle engineers hardware and software to work together in the cloud and in your data center. For more information about Oracle (NASDAQ:ORCL), visit www.oracle.com. TrademarksOracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

    Read the article

  • CodePlex learns to talk to other services!

    CodePlex is now able to talk to other services! For example, if you want CodePlex to tell Trello to update cards on your Trello board, it can do it. Or if you want CodePlex to notify your Campfire chat room when updates are pushed, it can do that too. To start off, we are going to be adding support for the following services: Campfire – Notify a Campfire chat room when commits occur HipChat – Notify a HipChat chat room when commits occur Trello – Add commit summaries to Trello cards by referencing those cards in commit messages Twitter – Notify your Twitter followers when updates are pushed to your project In addition, we will continue to support our existing integrations with Windows Azure – Continuously deploy to Windows Azure on pushes (For Git and Hg projects) AppHarbor – Continuously deploy to AppHarbor on pushes To set up these integrations for your project, navigate to the project settings page as a project coordinator, and click on the services section as seen below:   While we are starting with these six services, the infrastructure is now in place to allow us to quickly roll out new integrations. We would love to hear which services and integrations you would like to see most on our suggestions page. We realize that there are some services and URLs that only make sense for your project to send notifications to. To support this scenario, we plan to add generic web hooks in the near future. Have ideas on how to improve CodePlex? Please visit our suggestions page! Vote for existing ideas or submit a new one. As always you can reach out to the CodePlex team on Twitter @codeplex or reach me directly @Rick_Marron.    

    Read the article

  • WCF Services (with RIA)

    - by netlogging
    I am new to WCF and WCF-derived services. I am using VS 2010, Silverlight 4, and RIA Services 4. Recently I created plain WCF REST services (no RIA, no SOAP) with my endpoint (using wsHttpBinding): <endpoint address="" behaviorConfiguration="wsBehavior" binding="wsHttpBinding" bindingConfiguration="wsbinding" contract="WcfService1.IService1"/> <behaviors> <endpointBehaviors> <behavior name="wsBehavior"> <webHttp/> </behavior>......... I use this service from a Silverlight 4 client and everything works fine. Then I created a new project using the "Silverlight Business Application" template, which uses RIA Services. Now the web.config uses DomainServices, and when I add a wsHttpBinding endpoint it does not work. I know I am not doing this correctly and I can't find any help online so far. What I am trying to do is create a RESTful WCF application with RIA (no SOAP) that I can use from a Silverlight 4 client. For some reason I cannot get the service working.

    Read the article

  • Customize a WCF RIA Services Endpoint

    - by Andrew Garrison
    Is it possible to customize the parameters of a WCF RIA Services endpoint? Specifically, I would like to create a custom binding for the endpoint and increase the maxReceivedMessageSize to allow sending the contents of a file that is a few megabytes in size. I've tried meddling in the web.config, but I'm getting the following error: [InvalidOperationException]: The contract name MyNamespace.MyService could not be found in the list of contracts implemented by the service MyNamespace.MyService
    web.config:
        <system.serviceModel>
          <bindings>
            <customBinding>
              <binding name="CustomBinaryHttpBinding">
                <binaryMessageEncoding />
                <httpTransport maxReceivedMessageSize="2147483647" maxBufferSize="2147483647" />
              </binding>
            </customBinding>
          </bindings>
          <services>
            <service name="MyNamespace.MyService">
              <endpoint address="" binding="wsHttpBinding" contract="MyNamespace.MyService" />
              <endpoint address="/binary" binding="customBinding" bindingConfiguration="CustomBinaryHttpBinding" contract="MyNamespace.MyService" />
            </service>
          </services>
          <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" />
        </system.serviceModel>

    Read the article

  • How to Browse Without a Trace with an Ubuntu Live CD

    - by Trevor Bekolay
    No matter how diligently you clear your cache and erase your history, web browsing leaves traces on your computer. If you need keep your browsing private, then an Ubuntu Live CD is the answer. The key to this trick is that the Live CD environment runs completely in RAM, so things like your cache, cookies, and history don’t get saved to a persistent storage location. On a hard drive, even deleted files can be recovered, but once a computer is turned off the data stored in RAM is unrecoverable. In addition, since the Ubuntu Live CD environment is the same no matter what computer you use it on, there’s very little identifying information that a website can use to track you! The first step is to either burn an Ubuntu Live CD, or prepare a non-persistent Ubuntu USB flash drive. Ubuntu treats non-persistent flash drives like CDs, so files will not be written to it, but if you’re paranoid, then using a physical CD ensures that nothing gets written to a storage device. Boot up from the CD or flash drive, and choose to Run Ubuntu from the CD or flash drive if prompted (for more detailed instructions on booting from a CD or USB drive, see this article, or our guide on booting from a flash drive even if your BIOS won’t let you). Once the graphical Ubuntu environment comes up, you can click on the Firefox icon at the top of the screen to start browsing. If your browsing requires Flash, then you can install it by clicking on System at the top-left of the screen, then Administration > Synaptic Package Manager. Click on Settings at the top of the Synaptic window, and then select Repositories. Add a check in the checkbox with the label ending in “multiverse”. Click Close. Click the Reload button in the main Synaptic window. The list of available packages will reload. When they’ve reloaded, type “restricted” in the Quick search box. Right-click on ubuntu-restricted-extras and select Mark for Installation. It will note a number of other packages that will be installed. This list includes audio and video codecs, so after installing these, you should be able to play downloaded movies and songs. Click Mark to accept the installation of these other packages. Once you return to the main Synaptic window, click the Apply button and go through the dialogs to finish the installation of Flash and the other useful packages. If you open up Firefox now, you’ll have no problems using websites that use Flash. When you’re done browsing and shut down or restart your computer, all traces of your web browsing will be gone. It’s a bit of work compared to just using a privacy-centric browser, but if it’s very important that your browsing leave no traces on your hard drive, an Ubuntu Live CD is your best bet. 
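    The Synaptic steps above have a short terminal equivalent once the multiverse repository has been enabled as described; ubuntu-restricted-extras pulls in Flash along with the common audio and video codecs:

        sudo apt-get update
        sudo apt-get install ubuntu-restricted-extras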

    Read the article

  • Clone a Hard Drive Using an Ubuntu Live CD

    - by Trevor Bekolay
    Whether you’re setting up multiple computers or doing a full backup, cloning hard drives is a common maintenance task. Don’t bother burning a new boot CD or paying for new software – you can do it easily with your Ubuntu Live CD. Not only can you do this with your Ubuntu Live CD, you can do it right out of the box – no additional software needed! The program we’ll use is called dd, and it’s included with pretty much all Linux distributions. dd is a utility used to do low-level copying – rather than working with files, it works directly on the raw data on a storage device. Note: dd gets a bad rap, because like many other Linux utilities, if misused it can be very destructive. If you’re not sure what you’re doing, you can easily wipe out an entire hard drive, in an unrecoverable way. Of course, the flip side of that is that dd is extremely powerful, and can do very complex tasks with little user effort. If you’re careful, and follow these instructions closely, you can clone your hard drive with one command. We’re going to take a small hard drive that we’ve been using and copy it to a new hard drive, which hasn’t been formatted yet. To make sure that we’re working with the right drives, we’ll open up a terminal (Applications > Accessories > Terminal) and enter in the following command: sudo fdisk -l We have two small drives, /dev/sda, which has two partitions, and /dev/sdc, which is completely unformatted. We want to copy the data from /dev/sda to /dev/sdc. Note: while you can copy a smaller drive to a larger one, you can’t copy a larger drive to a smaller one with the method described below. Now the fun part: using dd. The invocation we’ll use is: sudo dd if=/dev/sda of=/dev/sdc In this case, we’re telling dd that the input file (“if”) is /dev/sda, and the output file (“of”) is /dev/sdc. If your drives are quite large, this can take some time, but in our case it took just less than a minute. If we do sudo fdisk -l again, we can see that, despite not formatting /dev/sdc at all, it now has the same partitions as /dev/sda. Additionally, if we mount all of the partitions, we can see that all of the data on /dev/sdc is now the same as on /dev/sda. Note: you may have to restart your computer to be able to mount the newly cloned drive. And that’s it… If you exercise caution and make sure that you’re using the right drives as the input file and output file, dd isn’t anything to be scared of. Unlike other utilities, dd copies absolutely everything from one drive to another – that means that you can even recover files deleted from the original drive in the clone!
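    dd's defaults are slow and silent; a variant many people use instead (status=progress needs GNU coreutils 8.24 or newer, so check your live image) is sketched below with the same example drive names:

        # Clone with a larger block size and a live progress readout.
        sudo dd if=/dev/sda of=/dev/sdc bs=4M status=progress

        # Ask the kernel to re-read the clone's partition table instead of rebooting.
        sudo partprobe /dev/sdc     # part of the parted package; may need installing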

    Read the article

  • Using Live Data in Database Development Work

    - by Phil Factor
    Guest Editorial for Simple-Talk Newsletter... in which Phil Factor reacts with some exasperation when coming across a report that a majority of companies were still using financial and personal data for both developing and testing database applications. If you routinely test your development work using real production data that contains personal or financial information, you are probably being irresponsible, and at worst, risking a heavy financial penalty for your company. Surprisingly, over 80% of financial companies still do this. Plenty of data breaches and fraud have happened from the use of real data for testing, and a data breach is a nightmare for any organisation that suffers one. The cost of each data breach averages out at around $7.2 million in the US in notification, escalation, credit monitoring, fines, litigation, legal costs, and lost business due to customer churn, £1.9 million in the UK. 70% of data breaches are done from within the organisation. Real data can be exploited in a number of ways for malicious or criminal purposes. It isn't just the obvious use of items such as name and address, date of birth, social security number, and credit card and bank account numbers: Data can be exploited in many subtle ways, so there are excellent reasons to ensure that a high priority is given to the detection and prevention of any data breaches. You'll never successfully guess all the ways that real data can be exploited maliciously, or the ease with which it can be accessed. It would be silly to argue that developers never need access to a copy of the database containing live data. Developers sometimes need to track a bug that can only be replicated on the data from the live database. However, it has to be done in a very restrictive harness. The law makes no distinction between development and production databases when a data breach occurs, so the data has to be held with all appropriate security measures in place. In Europe, the use of personal data for testing requires the explicit consent of the people whose data is being held. There are federal standards such as GLBA, PCI DSS and HIPAA, and most US States have privacy legislation. The task of ensuring compliance and tight security in such circumstances is an expensive and time-consuming overhead. The developer is likely to suffer investigation if a data breach occurs, even if the company manages to stay in business. Ironically, the use of copies of live data isn't usually the most effective way to develop or test your data. Data is usually time-specific and isn't usually current by the time it is used for testing, Existing data doesn't help much for new functionality, and every time the data is refreshed from production, any test data is likely to be overwritten. Also, it is not always going to test all the 'edge' conditions that are likely to flush out bugs. You still have the task of simulating the dynamics of actual usage of the database, and here you have no alternative to creating 'spoofed' data. Because of the complexities of relational data, It used to be that there was no realistic alternative to developing and testing with live data. However, this is no longer the case. Real data can be obfuscated, or it can be created entirely from scratch. The latter process used to be impractical, now that there are plenty of third-party tools to choose from. The process of obfuscation isn't risk free. The process must access the live data, and the success of the obfuscation process has to be carefully monitored. 
    Database data security isn't an exciting topic to you or me, but to a hacker it can be an all-consuming obsession, especially if there is financial or political gain involved. This is not the sort of adversary one would wish for, and it is far better to accept, and work with, the security restrictions that exist for using live data in database development work, especially when the tools exist to create large, realistic test data sets that are actually better for several aspects of testing.
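    As a toy illustration of the obfuscation step the editorial mentions, and nothing more than that (the table and column names are invented, and real masking also has to preserve formats, distributions and referential integrity):

        -- Replace personal identifiers with deterministic but meaningless values.
        UPDATE dbo.Customer
        SET    FirstName  = 'First' + CAST(CustomerID AS varchar(10)),
               LastName   = 'Last'  + CAST(CustomerID AS varchar(10)),
               Email      = 'user'  + CAST(CustomerID AS varchar(10)) + '@example.test',
               CardNumber = NULL;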

    Read the article

  • 12.10 unable to install or even run from Live CD with nVidia GTX 580

    - by user99056
    I've used Ubuntu in the past (set up as web server, etc over in Iraq), so I'm not a 100% Linux Noob, however, I'm running into a brick wall here. I've got a machine I built when I got back to the US earlier this year, running Windows 7 Ultimate on it, and I've now got some free time and would like to transition over to Ubuntu full time. I've searched around in the forums, and there seems to be an issue with the nVidia graphics cards, so I've tried going to the EVGA site to see if I could find a new BIOS update for it and had no luck, so I'm back searching the forums here again and decided to just go ahead and post my question. My apologies if this is covered in another post and I was just unable to find it. I've found a few 'similar' posts, but nothing as bad as my issue. With the history aside, here is the actual detailed issue: I purchased a new SSD (Intel 520 SSD), arrived today, and I disconnect my old Windows 7 SSD. I had pre downloaded the ubuntu-12.10-desktop-amd64 earlier today and burned it to DVD. Upon inserting the Live CD into the computer and booting up, everything was fine up to the 'Run From Live CD' or 'Install Ubuntu Now' buttons. As I was sure I wanted to go ahead and make the switch, I selected the 'Install Now' from the right hand side. CD Spins up, black window pops up, and then the errors started: date/time GPU Lockup date/time Failed to idle channel 1 date/time PFIFO - playlist update failed date/time Failed to idle channel 2 date/time PFIFO - playlist update failed Thinking it might correct itself, I let it run and it would swap over to a GUI Screen that was locked up with major blurring/etc, then back to the command line with the errors. Eventually it said something along the lines of 'unknown status' and switched back to the GUI and froze. So, that's when I tried to see if I could find a BIOS upgrade for the nVidia GTX580 cards, and had no luck. So I thought, why not try to just run it from the Live CD and see if I can at least get a look at it, maybe if I could get it running try to do some sort of install from there and fix the driver issue. I rebooted, brought up the Live CD, and this time chose the left option / run from the CD. It brought me all the way in to the desktop, I saw my drives, the other icons, could move the mouse, etc for about 30 seconds and then it locked up completely. I've tried this a couple of times and get the same results every time. Hardware: Intel i7-3930K CPU @ 3.2GHz (12 CPUs) / MSI MS-7760 Motherboard / 32GB RAM / 2 x EVGA (nVidia) GeForce GTX 580 (4GB Ram each) So the question is: Is there any way to install 12.10 if you can't even get the Live CD to run (for more than 30 seconds)? My current hardware configuration is both of the GTX 580 cards have an SLI jumper on them, and I have 2 monitors on each card. (Ubuntu info obviously only shows on the main monitor from the failed installation and the attempt at running the Live CD). Perhaps opening the machine back up and removing the SLI Jumper and removing the other 3 monitors (so it only would have 1 video card with one monitor on it) would actually allow me to get 12.10 installed, then I could work on an nVidia Video Driver fix for the GTX 580, and then possibly hook up the other video card and monitors? Or is this something that they are currently aware of and may update with a future release in the next few days/weeks? 
Any thoughts or suggestions would be greatly appreciated, as I can't even try to fix the issue (assuming it is the nVidia drivers) if I can't even get it to install at all.
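    Not part of the original post, but the usual first workaround for nouveau lockups like the GPU/PFIFO errors described above is to boot the installer with the nomodeset kernel parameter and add the proprietary NVIDIA driver after installation; roughly:

        # At the live CD menu press F6 (or e in GRUB) and append nomodeset
        # to the kernel line before booting, for example:
        linux /casper/vmlinuz boot=casper nomodeset quiet splash ---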

    Read the article

  • How to use the Windows Live Contacts API to allow users to import contacts

    - by AlfaTeK
    I can't find a good, simple tutorial on how to achieve this in PHP/JavaScript. Basically it's the same thing Facebook has, where the user specifies their Hotmail account and then gives permission to Facebook to access their contacts. As the API seems to have changed, most tutorials simply don't work; for example, http://livecontactsphp.codeplex.com/ authenticates and gets permission from the user but doesn't retrieve the contacts.

    Read the article

  • Display On-Screen Television Graphics During Live Broadcasts

    - by ServAce85
    How and with what software do major television companies display on-screen graphics during their programs? For example: On ESPN, how do they produce the news ticker on the bottom (both graphically and from a source code point of view), create and display the scores of sports, and update and show selected stats on the fly? I've looked everywhere I can think of to find the answer, but I am at a loss. That's where you guys (and girls) come in. I hope you can help shed some light on this subject for me. Thanks in advance.

    Read the article

  • Adding a new column to Table which contains live data

    - by Ardman
    I have a large table consisting of over 60 million records, and I would like to add 2 new columns for data migration purposes. There are indexes on the table and some of them are large. By adding the 2 new columns, do I run the risk of slowing down the database while it attempts to add them, and maybe timing out? Or will it just work? I know that if I try to rearrange the columns SQL Server will ask me to drop and re-create the table, so I definitely don't want that. Is this something everyone is challenged with?
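    For what it's worth, adding nullable columns with no default is normally a metadata-only change in SQL Server, so the existing 60 million rows and their indexes are not rewritten; the table and column names below are placeholders:

        -- Fast on large tables: NULLable, no DEFAULT, so only metadata changes.
        ALTER TABLE dbo.LargeTable
            ADD MigrationBatchId int      NULL,
                MigratedOn       datetime NULL;

    Adding a NOT NULL column with a default, or back-filling the new columns for every row straight away, is what can turn this into a long, heavily logged operation.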

    Read the article

  • Workaround for stopping propagation with live()?

    - by bobsoap
    I've run into the problem that has been addressed here without a workaround: I can't use stopPropagation() on dynamically spawned elements. I've tried creating a condition to exclude a click within the dimensions of the spawned element, but that doesn't seem to work at all. Here is what I got: 1) a large background element ("canvas") that is activated to be "sensitive to clicks on it" by a button 2) the canvas, if activated, catches all clicks on it and spawns a small child form ("child") within it 3) the child is positioned relative to the mouse click position. If the mouse click was on the right half of the canvas, the child will be positioned 200 pixels to the left of that spot. (On the right if the click was in the left half) 4) every new click on the canvas removes the existing child (if any) and spawns a new child at the new position (relative to the click) The problem: Since the spawned child element is on top of the canvas, a click on it counts as a click on the canvas. Even if the child is outside of the boundaries of the canvas, clicking on it will trigger the action as described in 4) again . This shouldn't happen. =========== CODE: The button to activate the canvas: $('a#activate').click(function(event){ event.preventDefault(); canvasActive(); }); I referenced the above to show you that the canvas click-catching is happening in a function. Not sure if this is relevant... This is the function that catches clicks on the canvas: function canvasActive() { $('#canvas').click(function(e){ e.preventDefault(); //get click position relative to canvas posClick = { x : Math.round(e.pageX - $(this).offset().left), y : Math.round(e.pageY - $(this).offset().top) }; //calculate child position if(posClick.x <= $canvas.outerWidth(false)/2) { posChild = { x: posClick.x + 200, //if dot is on the left side of canvas y: posClick.y }; } else { posChild = { x: posClick.x - 600, //if dot is on the right y: posClick.y }; } $(this).append(markup); //markup is just the HTML for the child }); } I left out the unimportant stuff. The question is: How can I prevent a click inside of a spawned child from executing the function? I tried getting the child's dimensions and doing something like "if posClick is within this range, don't do anything" - but I can't seem to get it right. Perhaps someone has come across this dilemma before. Any help is appreciated.
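    One common workaround, not taken from the question, is to keep the single canvas handler but bail out when the click originates inside the spawned child, by testing e.target rather than relying on stopPropagation(); the .child-form selector below is hypothetical:

        // Inside canvasActive(), ignore clicks that started in the spawned child.
        $('#canvas').click(function (e) {
            if ($(e.target).closest('.child-form').length) {
                return;                 // click came from the child form; do nothing
            }
            e.preventDefault();
            // ...existing position calculation and append(markup) logic...
        });

    Since the check runs in the parent's handler, it works no matter how the child was spawned, which is exactly the case live()-style delegation makes awkward.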

    Read the article

  • Recommendations for Creating a Picture Slide Show with Super-Smooth Transitions (For Live Presentations)

    - by Nick
    Hi everyone, I'm doing a theatrical performance, and I need a program that can read images from a folder and display them full screen on one of the computer's VGA outputs, in a predetermined order. All it needs to do is start with the first image, and when a key is pressed (space bar, right arrow), smoothly cross-fade to the next image. Sounds just like PowerPoint, right? The only reason I can't use PowerPoint/OpenOffice is that the "fade smoothly" transition isn't smooth enough, or configurable enough. It tends to be fast and choppy, where I would like to see a perfectly smooth fade over, say, 30 seconds. So the question is: what is the best (cheap and fast) way to accomplish this? Is there a program that already does this well (for cheap or free)? Or should I try to hack at OpenOffice's transition code? Or would it be easier to create this from scratch? Are there frameworks that might make it easier? I have web programming experience (PHP), but not desktop or real-time rendering. Any suggestions are appreciated!

    Read the article

  • An Honest look at SharePoint Web Services

    - by juanlarios
    INTRODUCTION If you are a SharePoint developer you know that there are two basic ways to develop against SharePoint. 1) The object Model 2) Web services. SharePoint object model has the advantage of being quite rich. Anything you can do through the SharePoint UI as an administrator or end user, you can do through the object model. In fact everything that is done through the UI is done through the object model behind the scenes. The major disadvantage to getting at SharePoint this way is that the code needs to run on the server. This means that all web parts, event receivers, features, etc… all of this is code that is deployed to the server. The second way to get to SharePoint is through the built in web services. There are many articles on how to manipulate web services, how to authenticate to them and interact with them. The basic idea is that a remote application or process can contact SharePoint through a web service. Lots has been written about how great these web services are. This article is written to document the limitations, some of the issues and frustrations with working with SharePoint built in web services. Ultimately, for the tasks I was given to , SharePoint built in web services did not suffice. My evaluation of SharePoint built in services was compared against creating my own WCF Services to do what I needed. The current project I'm working on right now involved several "integration points". A remote application, installed on a separate server was to contact SharePoint and perform an task or operation. So I decided to start up Visual Studio and built a DLL and basically have 2 layers of logic. An integration layer and a data layer. A good friend of mine pointed me to SOLID principles and referred me to some videos and tutorials about it. I decided to implement the methodology (although a lot of the principles are common sense and I already incorporated in my coding practices). I was to deliver this dll to the application team and they would simply call the methods exposed by this dll and voila! it would do some task or operation in SharePoint. SOLUTION My integration layer implemented an interface that defined some of the basic integration tasks that I was to put together. My data layer was about the same, it implemented an interface with some of the tasks that I was going to develop. This gave me the opportunity to develop different data layers, ultimately different ways to get at SharePoint if I needed to. This is a classic SOLID principle. In this case it proved to be quite helpful because I wrote one data layer completely implementing SharePoint built in Web Services and another implementing my own WCF Service that I wrote. I should mention there is another layer underneath the data layer. In referencing SharePoint or WCF services in my visual studio project I created a class for every web service call. So for example, if I used List.asx. I created a class called "DocumentRetreival" this class would do the grunt work to connect to the correct URL, It would perform the basic operation of contacting the service and so on. If I used a view.asmx, I implemented a class called "ViewRetrieval" with the same idea as the last class but it would now interact with all he operations in view.asmx. This gave my data layer the ability to perform multiple calls without really worrying about some of the grunt work each class performs. This again, is a classic SOLID principle. So, in order to compare them side by side we can look at both data layers and with is involved in each. 
Lets take a look at the "Create Project" task or operation. The integration point is described as , "dll is to provide a way to create a project in SharePoint". Projects , in this case are basically document libraries. I am to implement a way in which a remote application can create a document library in SharePoint. Easy enough right? Use the list.asmx Web service in SharePoint. So here we go! Lets take a look at the code. I added the List.asmx web service reference to my project and this is the class that contacts it:  class DocumentRetrieval     {         private ListsSoapClient _service;      d   private bool _impersonation;         public DocumentRetrieval(bool impersonation, string endpt)         {             _service = new ListsSoapClient();             this.SetEndPoint(string.Format("{0}/{1}", endpt, ConfigurationManager.AppSettings["List"]));             _impersonation = impersonation;             if (_impersonation)             {                 _service.ClientCredentials.Windows.ClientCredential.Password = ConfigurationManager.AppSettings["password"];                 _service.ClientCredentials.Windows.ClientCredential.UserName = ConfigurationManager.AppSettings["username"];                 _service.ClientCredentials.Windows.AllowedImpersonationLevel =                     System.Security.Principal.TokenImpersonationLevel.Impersonation;             }     private void SetEndPoint(string p)          {             _service.Endpoint.Address = new EndpointAddress(p);          }          /// <summary>         /// Creates a document library with specific name and templateID         /// </summary>         /// <param name="listName">New list name</param>         /// <param name="templateID">Template ID</param>         /// <returns></returns>         public XmlElement CreateLibrary(string listName, int templateID, ref ExceptionContract exContract)         {             XmlDocument sample = new XmlDocument();             XmlElement viewCol = sample.CreateElement("Empty");             try             {                 _service.Open();                 viewCol = _service.AddList(listName, "", templateID);             }             catch (Exception ex)             {                 exContract = new ExceptionContract("DocumentRetrieval/CreateLibrary", ex.GetType(), "Connection Error", ex.StackTrace, ExceptionContract.ExceptionCode.error);                             }finally             {                 _service.Close();             }                                      return viewCol;         } } There was a lot more in this class (that I am not including) because i was reusing the grunt work and making other operations with LIst.asmx, For example, updating content types, changing or configuring lists or document libraries. One of the first things I noticed about working with the built in services is that you are really at the mercy of what is available to you. Before creating a document library (Project) I wanted to expose a IsProjectExisting method. This way the integration or data layer could recognize if a library already exists. Well there is no service call or method available to do that check. 
One of the first things I noticed about working with the built-in services is that you are at the mercy of what is available to you. Before creating a document library (project) I wanted to expose an IsProjectExisting method so the integration or data layer could recognize whether a library already exists. There is no service call or method available to do that check, so this is what I wrote:

public bool DocLibExists(string listName, ref ExceptionContract exContract)
{
    try
    {
        var allLists = _service.GetListCollection();
        return allLists.ChildNodes.OfType<XmlElement>().ToList().Exists(x => x.Attributes["Title"].Value == listName);
    }
    catch (Exception ex)
    {
        exContract = new ExceptionContract("DocumentRetrieval/GetList/GetListWSCall", ex.GetType(), "Unable to Retrieve List Collection", ex.StackTrace, ExceptionContract.ExceptionCode.error);
    }
    return false;
}

This really just gets an XmlElement containing all the lists; it was then up to me to sift through the clutter and see whether the document library already existed. That took a little getting used to. Instead of working with objects, you are working with the XmlElement response format of the web service. I wrote a LINQ query that checks whether any list has a "Title" attribute matching the list name, returning true if so and false otherwise. I didn't particularly like working this way, dealing with XmlElement responses and then having to dig through them to get at the exact data I was after. Once the DocLibExists check was done, I would either create the document library or send back an error indicating that it already existed.

Now consider the code that actually creates the document library (the CreateLibrary method shown earlier). Notice that the template ID is just an integer: every list template in SharePoint has an ID associated with it. Document library, image library, custom list, project tasks and so on each have a unique integer. That was fine until the client came back with specifics about what each "project" (document library) should have. They had three types of projects; each project would have unique views, about ten per project, and unique configuration (auditing, versioning, content types and so on). So what started as a simple "create a document library as a project repository" turned out to be quite involved.

The first thing I thought of was to create a template for the document library. There are other ways to do this too: using the web service calls you can configure views, versioning, even content types, but the catch is that you end up working quite extensively with CAML. I am not fond of CAML. I can work with it; I just don't like doing it. It is touchy, and it is often hard to pin down where a CAML statement went wrong. Working with web services and CAML proved especially annoying because the service call returns a generic error message that does not point to a CAML syntax error, or even indicate that CAML was the problem at all; I was never sure whether I was looking at a security, performance or code issue. It is made harder still by the way SharePoint handles metadata: there are "Name", "Display Name" and "StaticName" fields, and it is often unclear which one to use, so it took a lot of trial and error. There are tools that can help with CAML generation, and there is now IntelliSense for CAML statements in Visual Studio that might help, but ultimately I'm not fond of CAML with web services.
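To give a flavor of that, here is a hedged sketch of the kind of CAML you end up embedding in a Lists.asmx call. The method could live in the DocumentRetrieval class above; the "Projects" list and "ProjectID" field are purely illustrative, and depending on how the proxy was generated the query parameters may be typed as XmlNode rather than XmlElement.

// Illustrative only: querying list items through Lists.asmx means handing
// the service raw CAML fragments wrapped in XML elements.
public XmlElement GetProjectItems(string projectId)
{
    XmlDocument doc = new XmlDocument();

    XmlElement query = doc.CreateElement("Query");
    query.InnerXml = "<Where><Eq><FieldRef Name='ProjectID' />" +
                     "<Value Type='Text'>" + projectId + "</Value></Eq></Where>";

    XmlElement viewFields = doc.CreateElement("ViewFields");
    viewFields.InnerXml = "<FieldRef Name='Title' /><FieldRef Name='ProjectID' />";

    XmlElement queryOptions = doc.CreateElement("QueryOptions");

    // A typo anywhere in the CAML strings above typically comes back as a
    // generic SOAP fault with no hint about which element was wrong.
    return _service.GetListItems("Projects", null, query, viewFields, "100", queryOptions, null);
}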
So I decided to go with a template. My plan was to create a document library, configure it accordingly and then use the Template Builder that comes with the SharePoint SDK. This tool lets you create site templates, list templates and so on. It is interesting because it does not generate an STP file; it generates an XML definition plus a feature you can activate to make the template available on a site or site collection. The first issue I ran into was that one of the specifications for this template was that the "All Documents" view carry two web parts. It turns out the Template Builder does not include web parts in the list template definition it generates; it backed up the settings, the views and the content types, but not the custom web parts. I still decided to try it, even without the web parts on the page. The new template defined a new document library definition with a unique ID. The problem is that the service call accepts an int but only knows about the built-in library template IDs; any newly added definitions are not available to create. So this approach was a dead end.

I should also mention that one of the nice features of SharePoint is the ability to create list templates, back them up and then create lists based on them, all doable by end-user administrators. These templates are a bit different because they are saved as an STP file rather than an XML definition. I went down this route as well and looked for another service call that could create a document library based on a given template name. Nope, there is none.

After some thinking I decided to implement a WCF service to do the creation for me. I was quite certain the object model would let me create document libraries both from templates that require an ID and from templates saved as STP files. I won't bother posting the code that contacts the WCF service because it is self-explanatory, but here is the code I used to create a list from a custom template:

public ServiceResult CreateProject(string name, string templateName, string projectId)
{
    string siteurl = SPContext.Current.Site.Url;
    Guid webguid = SPContext.Current.Web.ID;

    using (SPSite site = new SPSite(siteurl))
    {
        using (SPWeb rootweb = site.RootWeb)
        {
            SPListTemplateCollection temps = site.GetCustomListTemplates(rootweb);
            ProcessWeb(siteurl, webguid, web => Act_CreateProject(web, name, templateName, projectId, temps));
        } // SPWeb
    } // SPSite

    return _globalResult;
}

private void Act_CreateProject(SPWeb targetsite, string name, string templateName, string projectId, SPListTemplateCollection temps)
{
    var temp = temps.Cast<SPListTemplate>().FirstOrDefault(x => x.Name.Equals(templateName));
    if (temp != null)
    {
        try
        {
            Guid listGuid = targetsite.Lists.Add(name, "", temp);
            SPList newList = targetsite.Lists[listGuid];
            _globalResult = new ServiceResult(true, "Success", "Success");
        }
        catch (Exception ex)
        {
            _globalResult = new ServiceResult(false, (string.IsNullOrEmpty(ex.Message) ? "None" : ex.Message + " " + templateName), ex.StackTrace.ToString());
        }
    }
}

private void ProcessWeb(string siteurl, Guid webguid, Action<SPWeb> action)
{
    using (SPSite sitecollection = new SPSite(siteurl))
    {
        using (SPWeb web = sitecollection.AllWebs[webguid])
        {
            action(web);
        }
    }
}

This is some of the code I implemented for the service; there was a lot more to project creation, which I will cover in my next blog post. I implemented an Action<SPWeb> helper (ProcessWeb) so I could properly dispose the SPWeb and SPSite objects without rewriting that plumbing over and over. So now I had a WCF service to create projects for me, and it let me do a lot more than just create a document library from a template; it gave me the flexibility to do just about anything the client wanted at project creation.
Once this was implemented, the client came back to me and said, "We reference all our projects with IDs in our application; we want SharePoint to do the same." This is something I have been doing for a while now, although I do hope SharePoint 2010 addresses it more directly. I have been adding metadata to SPWebs through the property bag (I believe I have blogged about it before); this time the metadata had to go on a document library. No problem. I also mentioned the web parts that were to go on the "All Documents" view; I took the opportunity to configure them with the appropriate settings. There were two settings that needed to be set on these web parts, one of them being a project ID configured in the web part properties. The following code enhances and replaces the Act_CreateProject method above:

private void Act_CreateProject(SPWeb targetsite, string name, string templateName, string projectId, SPListTemplateCollection temps)
{
    var temp = temps.Cast<SPListTemplate>().FirstOrDefault(x => x.Name.Equals(templateName));
    if (temp != null)
    {
        SPLimitedWebPartManager wpmgr = null;
        try
        {
            Guid listGuid = targetsite.Lists.Add(name, "", temp);
            SPList newList = targetsite.Lists[listGuid];
            SPFolder rootFolder = newList.RootFolder;
            rootFolder.Properties.Add(KEY, projectId);
            rootFolder.Update();
            if (rootFolder.ParentWeb != targetsite)
                rootFolder.ParentWeb.Dispose();

            if (!templateName.Contains("Natural"))
            {
                SPView alldocumentsview = newList.Views.Cast<SPView>().FirstOrDefault(x => x.Title.Equals(ALLDOCUMENTS));
                SPFile alldocfile = targetsite.GetFile(alldocumentsview.ServerRelativeUrl);
                wpmgr = alldocfile.GetLimitedWebPartManager(PersonalizationScope.Shared);
                ConfigureWebPart(wpmgr, projectId, CUSTOMWPNAME);
                alldocfile.Update();
            }

            if (newList.ParentWeb != targetsite)
                newList.ParentWeb.Dispose();
            _globalResult = new ServiceResult(true, "Success", "Success");
        }
        catch (Exception ex)
        {
            _globalResult = new ServiceResult(false, (string.IsNullOrEmpty(ex.Message) ? "None" : ex.Message + " " + templateName), ex.StackTrace.ToString());
        }
        finally
        {
            if (wpmgr != null)
            {
                wpmgr.Web.Dispose();
                wpmgr.Dispose();
            }
        }
    }
}

private void ConfigureWebPart(SPLimitedWebPartManager mgr, string prjId, string webpartname)
{
    var wp = mgr.WebParts.Cast<System.Web.UI.WebControls.WebParts.WebPart>().FirstOrDefault(x => x.DisplayTitle.Equals(webpartname));
    if (wp != null)
    {
        (wp as ListRelationshipWebPart.ListRelationshipWebPart).ProjectID = prjId;
        mgr.SaveChanges(wp);
    }
}

This shows how I was able to set metadata on the document library. It has to be added to the RootFolder of the document library; unfortunately, SPList does not expose a property bag I can add a key/value pair to, so the root folder it is. Now everything in the integration references projects by ID and no longer cares about names. My DocLibExists check had to change as well, because the built-in web services have no way to look at property bags, so I wrote another method on the service that does the equivalent check by ID instead of by name. The second thing you will notice about the code is the use of SPLimitedWebPartManager. I have seen several examples online and read a lot about memory leaks; the code above does not leak, because the web part manager creates an SPWeb of its own, so you just dispose of it like I did in the finally block.
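To tie the service side together, here is a hedged sketch of what the contract and that ID-based check might look like. The IProjectService and ProjectService names, the namespace and the ProjectExists implementation are illustrative; only CreateProject mirrors the method shown earlier, and KEY is the same property bag key used in Act_CreateProject.

using System;
using System.Linq;
using System.ServiceModel;
using Microsoft.SharePoint;

[ServiceContract(Namespace = "http://tempuri.org/project-integration")]
public interface IProjectService
{
    // Creates a document library from a saved custom list template.
    [OperationContract]
    ServiceResult CreateProject(string name, string templateName, string projectId);

    // ID-based replacement for the old name-based DocLibExists check.
    [OperationContract]
    bool ProjectExists(string projectId);
}

// Sketch only: CreateProject, Act_CreateProject, ProcessWeb and the KEY
// constant shown earlier would live in the other part of this partial class.
public partial class ProjectService : IProjectService
{
    public bool ProjectExists(string projectId)
    {
        string siteurl = SPContext.Current.Site.Url;
        Guid webguid = SPContext.Current.Web.ID;
        bool exists = false;

        // Walk the document libraries on the target web and compare the
        // project ID stored in each root folder's property bag.
        ProcessWeb(siteurl, webguid, web =>
        {
            exists = web.Lists.Cast<SPList>()
                .Where(l => l.BaseType == SPBaseType.DocumentLibrary)
                .Any(l => Equals(l.RootFolder.Properties[KEY], projectId));
        });

        return exists;
    }
}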
CONCLUSION

This is a long post, so I will stop here for now; I will continue with more comparisons and limitations in my next post. My conclusion for this example is that the built-in web services will do the trick if you can suffer through CAML and you only need simple operations. For everything else, there's WCF! I apologize for the disorganization of this post: I wrote it on a bus during a 12 hour trip to Iowa, half asleep and half awake, so hopefully it makes enough sense to someone.

    Read the article

  • Web services, J2EE, Spring, DB integration project ideas - maybe data mining related?

    - by saral jain
    I am a graduate Computer Science student (Data Mining and Machine Learning) and have good exposure to core Java (3 years). I have read up on a bunch of stuff on the following topics: design patterns; J2EE; web services (SOAP and REST); Spring and Hibernate; Java concurrency, including advanced features like tasks and executors. I would now like to do a project combining this stuff -- over my free time of course -- to get a better understanding of these things and to build an end-to-end piece of software (and to learn good design principles, plus SVN and Maven). Any good project ideas would be really appreciated. I just want to build this stuff to learn, so I don't really mind re-inventing the wheel. Also, anything related to data mining would be an added bonus as it fits with my research, but it is absolutely not necessary since this project is more about learning to do large-scale software development.

    Read the article

  • Is it typical for a provider of web services to also provide client libraries?

    - by HDave
    My company is building a corporate Java web app and we are leaning towards using GWT-RPC as the client-server protocol for performance reasons. However, in the future we will need to provide an API for other enterprise systems to access our data as well. For this, we were thinking of a SOAP-based web service. In my experience it is common for commercial providers of enterprise web applications to provide client libraries (Java, .NET, C#, etc.). Is this generally the case? I ask because, if so, why bother using SOAP or REST or any standard web services protocol at all? Why not just create client libraries that communicate via GWT-RPC?

    Read the article
