Search Results

Search found 62606 results on 2505 pages for 'sql files'.


  • Help With Hard links And Symlinks Moving Directory And Files

    - by Julio
    This is what I would like to do. I have a symlink "/var" linking to "/tmpfs/var.1" (/var -> /tmpfs/var.1). On startup I run a script called "cache_tmpfs" from /etc/rc.local, and this script copies the contents of /var.backup/* to /tmpfs/var.1/ with cp -dpRxf /var.backup/* /tmpfs/var.1/. Now the problem is that the kernel already has the messages log open at /var/log/messages. Is it possible to remove the current /var symlink and recreate a new one (pointing to /var.backup instead of /tmpfs/var.1) without issues, given that files already opened by the system stay open on their existing handles? rm /var && ln -s /var.backup /var Thanks...
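    A minimal sketch of the swap being asked about, assuming the same paths as above. Processes that already hold /var/log/messages open keep writing to the old target through their open file descriptors, so the switch only affects files opened afterwards:
      # Hedged sketch, not a tested fix: replace the /var symlink by creating the
      # new link under a temporary name and renaming it over the old one, instead
      # of rm + ln (which leaves a short window with no /var at all).
      ln -s /var.backup /var.new      # new symlink pointing at the backup copy
      mv -T /var.new /var             # -T treats /var as a plain entry, so the rename swaps the link
      # Daemons such as syslog keep writing to the old target until restarted/HUPed, e.g.:
      # kill -HUP "$(pidof rsyslogd)" # hypothetical; depends on which logger is in use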

    Read the article

  • Guidance: How to lay out your files for an Ideal Solution

    - by Martin Hinshelwood
    Creating a solution and keeping it maintainable over time is an art, not a science. I like being pedantic and having a place for everything, no matter how small. For setting up Areas to run multiple projects under one solution see my post on When should I use Areas in TFS instead of Team Projects, and for an explanation of branching see Guidance: A Branching strategy for Scrum Teams. Update 17th May 2010 – We are currently trialling running a single Sprint branch to improve our history. Whenever I set up a new Team Project I implement the basic version control structure. I put “readme.txt” files in the folder structure explaining the different levels, and a solution file called “[Client].[Product].sln” located at “$/[Client]/[Product]/DEV/Main” within version control. Developers should add any projects they need to create to that solution in the format “[Client].[Product].[ProductArea].[Assembly]”, and those projects will be picked up and built automatically when you set up Automated Builds using Team Foundation Build. All test projects need to use MSTest to get proper IDE and Team Foundation Build integration out of the box, and should be named for the assembly they test, with a naming convention of “[Client].[Product].[ProductArea].[Assembly].Tests”. Here is a description of the folder layout; this content should be replicated in readme files under version control in the relevant locations so that even developers new to the project can see how to do it. Figure: The Team Project level - at this level there should be a folder for each of the products that you are building, if you are using Areas correctly in TFS 2010. You should try very hard to avoid spaces, as these names always end up in a URL eventually, e.g. "Code Auditor" should be "CodeAuditor". Figure: Product Level - At this level there should be only 3 folders (DEV, RELEASE and SAFE), all of which should be in capitals. These folders represent the three stages of your application production line. Each of them may contain multiple branches, but this format leaves all of your branches at the same level. Figure: The DEV folder is where all of the Development branches reside. The DEV folder will contain the "Main" branch and all feature branches, if they are being used. The DEV designation specifies that all code in every branch under this folder has not been released or made ready for release. Feature branches MUST merge (Forward Integrate) from Main and stabilise prior to merging (Reverse Integration) back down into Main and being decommissioned. Figure: In the Feature branching scenario only merges are allowed onto Main; no development can be done there. Once we have a mature product it is important that new features being developed in parallel are kept separate. This would most likely be used if we had more than one Scrum team working on a single product. Figure: When we are ready to do a release of our software we will create a release branch that is then stabilised prior to deployment. This protects the serviceability of our released code, allowing developers to fix bugs and re-release an existing version. Figure: All bugs found in a release are fixed on the release and a new deployment is created. After the deployment is created the bug fixes are then merged (Reverse Integration) into the Main branch. We do this so that we separate our development from our production-ready code. Figure: SAFE or RTM is a read-only record of what you actually released.
Labels are not immutable so are useless in this circumstance. When we have completed stabilisation of the release branch and we are ready to deploy to production, we create a read-only copy of the code for reference. In some cases this could be a regulatory concern, but in most cases it protects the company building the product from legal entanglements based on what you did or did not release. Figure: This allows us to reference any particular version of our application that was ever shipped. In addition, I am an advocate of having a single solution with all the project folders directly under the “Trunk”/“Main” folder and using the full name for the project folders. Figure: The ideal solution. If you must have multiple solutions because you need to use more than one version of Visual Studio, name the solutions “[Client].[Product][VSVersion].sln” and have them reside in the same folder as the other solution. This makes Automated Builds easier and improves the discoverability of your code and its dependencies. Send me your feedback!

    Read the article

  • multi user web game with scheduled processing?

    - by Rooq
    I have an idea for a game which I am in the process of designing, but I am struggling to establish if the way I plan to implement it is possible. The game is a text-based sports management simulation. It will require players to take certain actions through a web browser which will interact with a database - adding/updating and selecting. Most of the code required to be executed at this point will be fairly straightforward. The main processing will be done by applications scheduled to run on the server at certain times. These apps will process transactions added by the players and also perform some automatic processing based on the game date. My plan was to use a SQL Server database (at last count I require about 20 tables) and VB.net for all the coding (coming from a mainframe programming background this language is the simplest for me to get to grips with). I will also need a scheduling tool on the server. Can anyone tell me if what I am planning is feasible before I dive into the actual coding stage of my project?

    Read the article

  • Downloading large files hangs system

    - by Igor
    When I'm trying to download large files, i.e. 1 GB or more, under Firefox, it starts with a very high download speed and within a few seconds almost gets up to max (~11 MBps). It downloads very fast, but when the downloaded size gets near 700-800 MB or more, my system almost completely hangs, so I can do nothing - I just have to wait until it finishes downloading. Also, when it hangs I can't see the download progress - it looks like it has completely hung. Sometimes, however, if the file size is near 1 GB, the system comes back from the hang and finishes the download, but sometimes I just can't wait for the system to come back and have to kill Firefox from top (it takes me 2 minutes to do this because of the very slow system performance). I use Firefox as my primary browser. If I use wget with a direct link to the file, everything is fine - speed at max, no performance decrease. So what can I do?

    Read the article

  • How to repair ods files

    - by karthick87
    I have a few ods files, but all of a sudden they are not opening. When I open a file I get the following error. Please look at the below snapshot. Error on opening the file with Archive Manager: karthick@karthick:/media/Datas$ zip -FF data.ods --out repaired_file.ods Fix archive (-FF) - salvage what can Found end record (EOCDR) - says expect single disk archive Scanning for entries... copying: mimetype (46 bytes) copying: Configurations2/statusbar/ (0 bytes) copying: Configurations2/accelerator/current.xml (2 bytes) copying: Configurations2/floater/ (0 bytes) copying: Configurations2/popupmenu/ (0 bytes) copying: Configurations2/progressbar/ (0 bytes) copying: Configurations2/menubar/ (0 bytes) copying: Configurations2/toolbar/ (0 bytes) copying: Configurations2/images/Bitmaps/ (0 bytes) copying: content.xml zip warning: no end of stream entry found: content.xml zip warning: rewinding and scanning for later entries
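    A hedged sketch of how that salvage attempt is usually followed up, reusing the file names above; zip -FF only rebuilds what it can find, so content.xml may still come back truncated:
      unzip -t repaired_file.ods     # test the rebuilt archive's integrity entry by entry
      soffice repaired_file.ods      # then try opening the repaired copy in LibreOffice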

    Read the article

  • I have 5 days of vouchers for MS training… help me choose? [closed]

    - by Shyatic
    I'm a Microsoft centric guy (systems engineering side) and I already know the syntax of VB, have done VBScript pretty extensively and Excel VBA stuff as well. I want to make the leap into proper programming, probably with C# because it teaches me syntax I can use for Java if I want to go that route at some point. Since I have vouchers for 5 days of programming, and I can understand logic and how the .NET framework works... I would love to hear ideas on which MS courses I should take. My primary focus is to work on web applications with web services that interact and do neat stuff... like, for example, to create a 'chat' room or something interactive on the web. Or should I do something with HTML5/JS? I am really not sure... like I said, I want to work towards making web services/sites. Not making the next Facebook mind you, but I'd like to work towards something in that spectrum on a much smaller scale. Please give me any advice, I'd like to book these classes asap. Obviously getting involved with SQL and things that I will require would be important here... you guys know better than me! Thanks!

    Read the article

  • Handling changes to data types and entries in a database migration

    - by jandjorgensen
    I'm fully redesigning a site that indexes a number of articles with basic search functionality. The previous site was written about a decade ago, and I'm salvaging about 30,000 entries with data stored in less-than-ideal formats. While I'm moving from MSSQL to MySQL, I don't need to make any "live" changes, so this is not a production-level migration issue so much as a redesign. For instance, dates are stored the same as tags/subjects about the articles, but in strings as "YYYYMMDDd" (the lowercase d stands for "date" in the string). Essentially, before or after I move from the previous database format to a new one, I'm going to need to do a lot of replacement of individual entries. While I understand how to do operations with regular expressions in non-database issues, my database experience isn't robust enough to know the best way to handle this. What is the best (or standard) way to handle major changes like this? Is there an SQL operation I should be looking into? Please let me know if the problem isn't clear--I'm not entirely sure what kind of answer I'm looking for.
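    A minimal sketch of one way to clean those "YYYYMMDDd" strings once the rows are in MySQL; the table and column names (articles, tag_value, pub_date) and the database name are hypothetical stand-ins, since the real schema isn't shown:
      # Hedged sketch, not the actual migration: add a real DATE column, then fill it
      # from rows that look like 'YYYYMMDDd' by keeping the 8 digits and dropping the 'd'.
      mysql -u root -p mydb -e "ALTER TABLE articles ADD COLUMN pub_date DATE"
      mysql -u root -p mydb -e "UPDATE articles SET pub_date = STR_TO_DATE(LEFT(tag_value, 8), '%Y%m%d') WHERE CHAR_LENGTH(tag_value) = 9 AND RIGHT(tag_value, 1) = 'd'"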

    Read the article

  • Tools for assembling textures into DDS files

    - by Nicol Bolas
    There are plenty of tools for making images. I'm not looking for one of those; I have many tools for creating an image. I've got tools for compressing images, generating mipmaps, and even for poking at their basic data format. My issue is with texture assembly. DDS files support cubemaps, array textures, and even cubemap arrays. But I don't know of a tool that can pack a series of images into a cubemap or the like. What tools are available for doing this kind of thing?

    Read the article

  • MP4 files show up but won't stream to xbox 360

    - by Greg
    I set up a basic media server to stream to my 360 using uShare. Here are the instructions I used: http://linuxexpresso.wordpress.com/2011/01/02/howto-ubuntu-upnp-server-to-xbox-360/. I can stream avi files fine but I cannot stream mp4s. When I go to videos on the Xbox, I can see all of the videos and folders, but when I click play for an mp4 nothing happens. On my Ubuntu desktop I can click on the mp4 file and it plays fine. And if I take that file, stick it on a thumb drive and plug it directly into the Xbox, the mp4 will play off the thumb drive. I'm at a loss as to why it won't work through uShare. Any ideas?

    Read the article

  • How to make Master PDF Editor the default for .pdf files

    - by Hedley Finger
    I want to make Master PDF Editor (MPE) the default for opening PDF files. I right-clicked a PDF file and chose Properties > Open With. MPE was not listed. I clicked Show Other Applications but MPE was still not on the list. I tried opening the PDF file with MPE, editing it, and then closing it. MPE still did not show up. So how do I make MPE the default program, or at least make it appear in the Show Other Applications list?
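    A hedged sketch of the usual command-line route, assuming Master PDF Editor shipped a desktop entry; the .desktop file name below is a guess, so check /usr/share/applications first:
      ls /usr/share/applications | grep -i pdf                       # find the editor's .desktop entry
      xdg-mime default master-pdf-editor.desktop application/pdf     # hypothetical file name, substitute the real one
      xdg-mime query default application/pdf                         # confirm the new association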

    Read the article

  • Recursively rename files - oneliner preferably

    - by zetah
    I found this answer (how do i...) but it simply doesn't work - it did not rename any file, for a reason unknown to me. Before I started to search around I thought this would be an easy task even for a novice penguin, but it doesn't seem so for me. For example, I simply can't tell ls to list all *.txt in all subfolders, which was a surprise to me (without grep or similar). Then I found find, and find . -name name_1.txt lists files fine, but for f in $(find . -name name_1.txt) ; do echo "$f" ; done splits whole file paths on spaces, so that output can't sensibly be passed to a command like mv or rename. I want to ask what's wrong with the above command and, if possible, for some nifty one-liner so I can recursively rename name_1.txt to name_2.txt
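    A minimal sketch of the kind of one-liner being asked for, letting find run the rename itself so whitespace in paths never goes through word splitting; it assumes every match should become name_2.txt in its own directory:
      # -execdir runs mv inside each file's directory, so only the basename changes.
      find . -type f -name 'name_1.txt' -execdir mv {} name_2.txt \;
      # Equivalent null-delimited loop, if you prefer to see each path as it is handled:
      find . -type f -name 'name_1.txt' -print0 | while IFS= read -r -d '' f; do mv "$f" "${f%name_1.txt}name_2.txt"; done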

    Read the article

  • Gnome Activity Journal does not show recently used files

    - by Nik
    I am running Ubuntu 10.10 and installed Gnome Activity Journal. However it does not show any recently used files. I have attached a screenshot below. Please note that Gnome Activity Journal has been installed on the system for quite some time, so it is not that I installed it recently and it still has to gather data. Also, the zeitgeist-daemon is running in the background. Would reinstalling Zeitgeist help solve this problem? If so, could you please provide a PPA where I can find the latest stable release of Zeitgeist.

    Read the article

  • Unity does not use the categories from the .desktop files

    - by Melissa Newman
    I installed the educational version of Ubuntu with Unity. This is for kids. The most important applications are the ones that the description says are specially added for kids. Trying to find them is a pain in the applications directory. They are organized in the main menu, but Unity does not use the main menu information for anything. Bottom line, I am now going to reinstall Ubuntu and NOT include Unity. The panels feature is nice, but there needs to be some ability to organize the applications -- either with a menu or a directory structure that is read. The .desktop files indicate categories... like Education. Why does Unity not use this information?

    Read the article

  • How do you decide what kind of database to use?

    - by Jason Baker
    I really dislike the name "NoSQL", because it isn't very descriptive. It tells me what the databases aren't, when I'm more interested in what the databases are. I really think that this label encompasses several categories of database. I'm just trying to get a general idea of what job each particular database is the best tool for. A few assumptions I'd like to make (and would ask you to make): Assume that you have the capability to hire any number of brilliant engineers who are equally experienced with every database technology that has ever existed. Assume you have the technical infrastructure to support any given database (including available servers and sysadmins who can support said database). Assume that each database has the best support possible for free. Assume you have 100% buy-in from management. Assume you have an infinite amount of money to throw at the problem. Now, I realize that the above assumptions eliminate a lot of valid considerations that are involved in choosing a database, but my focus is on figuring out what database is best for the job on a purely technical level. So, given the above assumptions, the question is: what jobs are each database (including both SQL and NoSQL) the best tool for, and why?

    Read the article

  • Tales of a corrupt SQL log

    Warning: I'm a simple dev, not an all-powerful DBA with godly powers. This morning, one of my sites was down and DNN reported a problem with the database.  A quick series of tests revealed that the culprit was a corrupted log file. Easy fix, I said; I have daily backups, so it's just a matter of restoring a good copy of the database and log files.  Well, I found out that's not exactly true.  You see, for this database, I have daily file backups and these are not database backups created...

    Read the article

  • How Mature is Your Database Change Management Process?

    - by Ben Rees
    How do you get your database schema changes live, on to your production system? As your team of developers and DBAs are working on the changes to the database to support your business-critical applications, how do these updates wend their way through from dev environments, possibly to QA, hopefully through pre-production and eventually to production in a controlled, reliable and repeatable way? In this article, I describe a model we use to try and understand the different stages that customers go through as their database change management processes mature, from the very basic and manual, through to advanced continuous delivery practices. I also provide a simple chart that will help you determine “How mature is our database change management process?” This process of managing changes to the database – which all of us who have worked in application/database development have had to deal with in one form or another – is sometimes known as Database Change Management (even if we’ve never used the term ourselves). And it’s a difficult process, often painfully so. Some developers take the approach of “I’ve no idea how my changes get live – I just write the stored procedures and add columns to the tables. It’s someone else’s problem to get this stuff live. I think we’ve got a DBA somewhere who deals with it – I don’t know, I’ve never met him/her”. I know I used to work that way. I worked that way because I assumed that making the updates to production was a trivial task – how hard can it be? Pause the application for half an hour in the middle of the night, copy over the changes to the app and the database, and switch it back on again? Voila! But somehow it never seemed that easy. And it certainly was never that easy for database changes. Why? Because you can’t just overwrite the old database with the new version. Databases have a state – more specifically 4Tb of critical data built up over the last 12 years of running your business, and if your quick hotfix happened to accidentally delete that 4Tb of data, then you’re “Looking for a new role” pretty quickly after the failed release. There are a lot of other reasons why a managed database change management process is important for organisations, besides job security, not least: Frequency of releases. Many business managers are feeling the pressure to get functionality out to their users sooner, quicker and more reliably. The new book (which I highly recommend) Lean Enterprise by Jez Humble, Barry O’Reilly and Joanne Molesky provides a great discussion on how many enterprises are having to move towards a leaner, more frequent release cycle to maintain their competitive advantage. It’s no longer acceptable to release once per year, leaving your customers waiting all year for changes they desperately need (and expect). Auditing and compliance. SOX, HIPAA and other compliance frameworks have demanded that companies implement proper processes for managing changes to their databases, whether managing schema changes, making sure that the data itself is being looked after correctly or other mechanisms that provide an audit trail of changes.
We’ve found, at Red Gate, that we have a very wide range of customers using every possible form of database change management imaginable. Everything from “Nothing – I just fix the schema on production from my laptop when things go wrong, and write it down in my notebook” to “A full Continuous Delivery process – any change made by a dev gets checked in and recorded, fully tested (including performance tests) before a (tested) release is made available to our Release Management system, ready for live deployment!”. And everything in between of course. Because of the vast number of customers using so many different approaches we found ourselves struggling to keep on top of what everyone was doing – struggling to identify patterns in customers’ behavior. This is useful for us, because we want to try and fit the products we have to different needs – different products are relevant to different customers and we waste everyone’s time (most notably, our customers’) if we’re suggesting products that aren’t appropriate for them. If someone visited a sports store, looking to embark on a new fitness program, and the store assistant suggested the latest $10,000 multi-gym, complete with multiple weights mechanisms, dumb-bells, pull-up bars and so on, then he’s likely to lose that customer. All he needed was a pair of running shoes! To solve this issue – in an attempt to simplify how we understand our customers and our offerings – we built a model. This is an attempt to classify our customers into some sort of model or “Customer Maturity Framework”, as we rather grandly term it, which somehow simplifies our understanding of what our customers are doing. The great statistician, George Box (amongst other things, the “Box” in the Box-Jenkins time series model) gave us the famous quote: “Essentially all models are wrong, but some are useful”. We’ve taken this quote to heart – we know it’s a gross over-simplification of the real world of how users work with complex legacy and new database developments. Almost nobody precisely fits into one of our categories. But we hope it’s useful and interesting. There are actually a number of similar models that exist for more general application delivery. We’ve found these from ThoughtWorks/Forrester, from InfoQ and others, and initially we tried just taking these models and replacing the word “application” with “database”. However, we hit a problem. From talking to our customers we know that users are not nearly as far down the road of mature database change management as they are for application development. As a simple example, no application developer who wants to keep his/her job would develop an application for an organisation without source controlling that code. Sure, he/she might not be using an advanced Gitflow branching methodology but they’ll certainly be making sure their code gets managed in a repo somewhere with all the benefits of history, auditing and so on. But this certainly isn’t the case (yet) for the database – a very large segment of the people we speak to have no source control set up for their databases whatsoever, even at the most basic level (for example, keeping change scripts in a source control system somewhere). By the way, if this is you, Red Gate has a great whitepaper here, on the barriers people face getting a source control process implemented at their organisations.
This difference in maturity is the same as you move into areas such as continuous integration (common amongst app developers, relatively rare for database developers) and automated release management (growing amongst app developers, very rare for the database). So, when we created the model we started from scratch and biased the levels of maturity towards what we actually see amongst our customers. But, what are these stages? And what level are you? The table below describes our definitions for four levels of maturity – Baseline, Beginner, Intermediate and Advanced. As I say, this is a model – you won’t fit any of these categories perfectly, but hopefully one will ring true more than others. We’ve also created a PDF with a flow chart to help you find which of these groups most closely matches your team: Download the Database Delivery Maturity Framework PDF here
Level D1 – Baseline: work directly on live databases; sometimes work directly in production; generate manual scripts for releases, sometimes using a product like SQL Compare or similar; any tests that we might have are run manually.
Level D2 – Beginner: some ad-hoc DB version control, such as manually adding upgrade scripts to a version control system; an attempt is made to keep production in sync with development environments; there is some documentation and planning of manual deployments; some basic automated DB testing is in process.
Level D3 – Intermediate: the database is fully version-controlled with a product like Red Gate SQL Source Control or SSDT; database environments are managed; the production environment schema is reproducible from the source control system; there are some automated tests; migration scripts have been looked at for difficult database refactoring cases.
Level D4 – Advanced: continuous integration is used for database changes; build, testing and deployment of DB changes are carried out through a proper database release process; tests are fully automated; the production system is monitored for fast feedback to developers.
Does this model reflect your team at all? Where are you on this journey? We’d be very interested in knowing how you get on. We’re doing a lot of work at the moment, at Red Gate, trying to help people progress through these stages. For example, if you’re currently not source controlling your database, then this is a natural next step. If you are already source controlling your database, what about the next stage – continuous integration and automated release management? To help understand these issues, there’s a summary of the Red Gate Database Delivery learning program on our site, alongside a Patterns and Practices library here on Simple-Talk and a Training Academy section on our documentation site to help you get up and running with the tools you need to progress. All feedback is welcome and it would be great to hear where you find yourself on this journey! This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.
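    As a concrete illustration of the most basic step the article describes (Level D2, getting change scripts into source control), here is a hedged shell sketch; it is not Red Gate's tooling, and the server, database and script file names are invented:
      # Keep numbered upgrade scripts in the application's repo and apply them in order.
      mkdir -p db/migrations
      git add db/migrations/001_create_orders_table.sql db/migrations/002_add_index.sql   # placeholder scripts
      git commit -m "Track database change scripts alongside application code"
      for script in db/migrations/*.sql; do
          sqlcmd -S myserver -d mydb -i "$script"    # hypothetical server/database names
      done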

    Read the article

  • How can I monitor a website for malicious changes to the files

    - by rossmcm
    I had an occasion recently where our website was compromised - a link farm was added to a couple of the pages on one occasion, and on another occasion a large and nasty aspx file was put on the server. I won't mention the host's name (Hostway), but I was pretty annoyed that someone was able to do this. No, it wasn't a leaky password - around 10 sites hosted by HW with consecutive IP addresses got trashed. Anyway. What I need is a utility or service (preferably free) that takes a snapshot of my website's contents, and then regularly monitors the files (size and datestamp) for unauthorized changes or additions, and alerts me. I've used web services that monitor one file for changes, but I'm looking for something a bit more aggressive.
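    A rough sketch of the do-it-yourself version, assuming shell access to the document root (the path below is a placeholder); checksums catch content changes that a size/datestamp check would miss:
      # Take a baseline of every file's SHA-256 hash...
      find /var/www/mysite -type f -print0 | xargs -0 sha256sum | sort > baseline.sha256
      # ...then re-run periodically (e.g. from cron) and diff against the baseline.
      find /var/www/mysite -type f -print0 | xargs -0 sha256sum | sort > current.sha256
      # Mailing the diff assumes a working local MTA; swap in whatever alerting you have.
      diff baseline.sha256 current.sha256 || mail -s "Site files changed" me@example.com < current.sha256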

    Read the article

  • How to Use the File History Feature in Windows 8 to Restore Files

    - by Taylor Gibb
    Jealous of your Mac OS X friends and their great Time Machine feature? Windows 8 has a new feature called File History that works much the same way, giving you an easy method to restore previous versions of your files. We are going to use a networked folder for our article, but you could always skip creating the network folder and just use a USB drive. To use a USB drive you can just go to the settings for File History and turn it on; it should automatically find your USB drive and immediately start working.

    Read the article

  • Files inside Alias folder not accessible

    - by John Isaacks
    In my apache2.conf I have an alias set up like this: Alias /cake/ /var/www-cake/repo <Directory /var/www-cake/repo> Order allow,deny Allow from all AllowOverride All Options +Indexes </Directory> Inside the /var/www-cake/repo directory I just have one file, index.php. When I go to http://linux-server/cake/ I get a directory listing that shows the index.php file. When I click on the file it takes me to http://linux-server/cake/index.php, where I get a 404 page not found error. What do I need to do to make the files accessible?
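    One hedged observation worth checking (a common gotcha, not a confirmed diagnosis): with Alias /cake/ mapping to a target that has no trailing slash, Apache concatenates the rest of the URL directly, so /cake/index.php becomes /var/www-cake/repoindex.php. Keeping the slashes consistent on both sides avoids that:
      # Sketch of the adjusted apache2.conf stanza - both the URL path and the target end with a slash.
      Alias /cake/ /var/www-cake/repo/
      <Directory /var/www-cake/repo/>
          Order allow,deny
          Allow from all
          AllowOverride All
          Options +Indexes
      </Directory>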

    Read the article

  • host and share files in my hosting

    - by user1314836
    I currently have a domain+hosting with unlimited hosting space for our website. On the other hand, I use Dropbox to share our organizational files and photos between about 10 users. The thing is that sharing photos uses too much space for what a free Dropbox account offers. So I am thinking of taking advantage of my hosting space, but using FTP seems not to be ideal for users who are not too skilled with computers. In addition, it doesn't handle versions in case some user makes a mess of it. And using a public FTP to upload and giving them only download permission doesn't seem a good idea, as I am only the CTO. So what I want is basically to implement a local Dropbox for a few users, but I'd prefer something that is not too complex to install/maintain. Thank you a lot.

    Read the article

  • Joining Two MKV files in Ubuntu?

    - by Ryan McClure
    I have an opera that I'm ripping to my computer in MKV format with Handbrake. This opera is on two discs. Is there a way to join the resulting MKV's together? They will have the same bitrate, resolution, etc. If I do this, can I keep chapters from both MKV files organized? And, since I have subtitles in the file (not burnt in), will they still stay intact? I'm not too sure if this question is off-topic or not. If it is, feel more than free to delete it. :)
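    A hedged sketch using mkvmerge from the mkvtoolnix package; the file names below are placeholders. The + tells mkvmerge to append the second file rather than mux it as extra tracks, and chapters and subtitle tracks from both halves should be carried over, though it is worth verifying them in the result:
      sudo apt-get install mkvtoolnix                    # provides mkvmerge
      mkvmerge -o opera-complete.mkv disc1.mkv +disc2.mkv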

    Read the article

  • Recover files from a SATA drive from USB

    - by Shane
    I've damaged the drive I had in my Windows laptop and now I want to try to recover as many files as possible. I know very little about Linux though. I have Ubuntu 10.04. I have a docking station for the drive and it is connected to my Linux machine. The drive appears in Disk Utility. Unfortunately, this is where I run out of ideas on how to proceed. Any help is appreciated and I can provide more info if needed.
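    A cautious sketch of the first thing usually tried, assuming the Windows partition shows up as /dev/sdb1 (check the real device name with lsblk or in Disk Utility); mounting read-only avoids writing to a damaged drive while files are copied off:
      lsblk                                      # identify the partition, e.g. /dev/sdb1 (hypothetical)
      sudo mkdir -p /mnt/recovery
      sudo mount -o ro /dev/sdb1 /mnt/recovery   # read-only mount
      cp -a /mnt/recovery/Users /path/to/safe/backup/    # adjust source and destination to what you need
      sudo umount /mnt/recovery
      # If the filesystem refuses to mount, imaging the disk with GNU ddrescue first is the safer next step.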

    Read the article

  • Backup system config files

    - by David ???
    I'm planning on installing the nVidia proprietary drivers on my Ubuntu 10.10. Historically this always ends up with me being left with no graphical interface, no ability to revert, and reinstalling the whole system. So now, before trying this anew, I wish to back up all relevant config files. I'll try one of two methods; I'll list each one's commands. I'd appreciate it if anyone can tell me how to back up the relevant files, or what the reverse of each operation is. 10x, David
    Method I - as described here: apt-get --purge remove xserver-xorg-video-nouveau; then, as described in this answer: edit /etc/default/grub and add the line GRUB_CMDLINE_LINUX="nouveau.modeset=0", run sudo update-grub, reboot, and install the original drivers downloaded from the nVidia site.
    Method II - as described here: sudo apt-get purge nvidia* (possibly 'sudo gedit /etc/modprobe.d/blacklist.conf', adding 'vga16fb' and 'nouveau'), sudo apt-get install nvidia-glx-185, sudo modprobe nvidia, sudo lsmod | grep -i nvidia, sudo nvidia-xconfig
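    A small hedged sketch of the backup step being asked about, assuming the files touched by the two methods above are the ones worth keeping copies of:
      # Copy the files these methods modify into a dated backup directory first.
      backup=~/config-backup-$(date +%Y%m%d)
      mkdir -p "$backup"
      sudo cp -a /etc/default/grub /etc/modprobe.d/blacklist.conf "$backup"/
      [ -f /etc/X11/xorg.conf ] && sudo cp -a /etc/X11/xorg.conf "$backup"/   # only exists on some setups
      # To revert later, copy the saved files back into place and run: sudo update-grub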

    Read the article

  • Managing Files/Folder in Content Repositories or File Systems with Oracle ADF and WebCenter

    - by Shay Shmeltzer
    One more entry in a set of entries (1, 2, 3) about the capabilities that WebCenter adds to ADF applications. WebCenter is basically the new Portal framework in the Oracle stack - and one key thing that portals do is work with content, allowing you to compose and publish content from files as well as save and store content. In this demo you'll see how, using a set of taskflows provided by WebCenter, you can add file management, creation and viewing capabilities to a regular ADF application. To try this out you don't need any fancy content management system - we'll just use your file system for now. All you need is the WebCenter extension installed in JDeveloper, and then you can follow the demo on your own JDeveloper instance. You'll define a connection to your content repository, and then you'll be able to add a bunch of pre-built WebCenter taskflows into your page. And suddenly you can upload/download/create and view documents directly from your application. Check it out:

    Read the article

  • USB-live does not save files between sessions

    - by Mads Skjern
    I created a USB stick with Ubuntu, using the recommended tool "Startup Disk Creator" and the image for Ubuntu 13.10. The very simple interface looks like this: There can't be much to misunderstand in this GUI. I have chosen to create a USB stick with a live version of Ubuntu, which will save files and settings from session to session on the USB drive, right? Well, it just doesn't save anything. I go in, create a file on the Desktop, restart, and it's gone. I did the whole procedure three times, i.e. first creating the USB stick, then testing if I could save. Have I misunderstood something?
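    A hedged way to check whether persistence was actually set up, assuming Startup Disk Creator was supposed to create a casper-rw area on the stick; the mount point below is a placeholder for wherever the stick appears:
      # On the installed (non-live) system, look for the persistence file on the stick:
      ls -lh /media/$USER/*/casper-rw        # should exist and have a non-trivial size
      # When booted from the stick, persistence is only active if the kernel was started
      # with the "persistent" boot option:
      grep -o persistent /proc/cmdline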

    Read the article
