Search Results

Search found 5420 results on 217 pages for 'auxiliary storage'.


  • How can I set paperclip's storage mechanism based on the current Rails environment?

    - by John Reilly
    I have a rails application that has multiple models with paperclip attachments that are all uploaded to S3. This app also has a large test suite that is run quite often. The downside with this is that a ton of files are uploaded to our S3 account on every test run, making the test suite run slowly. It also slows down development a bit, and requires you to have an internet connection in order to work on the code. Is there a reasonable way to set the paperclip storage mechanism based on the Rails environment? Ideally, our test and development environments would use the local filesystem storage, and the production environment would use S3 storage. I'd also like to extract this logic into a shared module of some kind, since we have several models that will need this behavior. I'd like to avoid a solution like this inside of every model:

        ### We don't want to do this in our models...
        if Rails.env.production?
          has_attached_file :image, :styles => {...},
            :storage => :s3,
            # ...etc...
        else
          has_attached_file :image, :styles => {...},
            :storage => :filesystem,
            # ...etc...
        end

    Any advice or suggestions would be greatly appreciated! :-)
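    One possible shape for that shared module, sketched here as an untested illustration (the module name, helper name, and options shown are invented for this example, not part of Paperclip):

        # Hypothetical helper: extend this module in any model that needs an
        # environment-aware attachment. It merges the appropriate :storage
        # option in before delegating to Paperclip's has_attached_file.
        module EnvironmentAwareAttachment
          def has_env_attached_file(name, options = {})
            storage_options =
              if Rails.env.production?
                { :storage => :s3 }          # plus :s3_credentials, :bucket, etc.
              else
                { :storage => :filesystem }
              end
            has_attached_file name, options.merge(storage_options)
          end
        end

        # Usage inside a model:
        #   extend EnvironmentAwareAttachment
        #   has_env_attached_file :image, :styles => { :thumb => "100x100" }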

    Read the article

  • Story of success: MySQL Enterprise Backup (MEB) was successfully integrated with IBM Tivoli Storage Manager (TSM) via the System Backup to Tape (SBT) interface.

    - by user13334359
    Since version 3.6, MEB supports backups to tape through the SBT interface. The officially supported tool for such backups to tape is Oracle Secure Backup (OSB), but there are a lot of other storage managers, and MEB allows you to use them through the SBT interface. Since version 3.7 it also has the option --sbt-environment, which allows you to pass environment variables not needed by OSB to third-party managers. At the same time, MEB cannot guarantee that it will work with all of them. This month we were contacted by a customer who wanted to use IBM Tivoli Storage Manager (TSM) with MEB. We could only tell them the same thing I wrote in the previous paragraph: this solution is supposed to work, but you have to be the pioneers of this technology. And they agreed. They agreed to be the pioneers, and so the story begins.

    MEB requires the following options to be specified by those who want to connect it to an SBT interface:

    - --sbt-database-name: a name which should be handed over to the SBT interface. This can be any name. The default, MySQL, works for most cases, so the user is not required to specify this option.
    - --sbt-lib-path: the path to the SBT library. For TSM this library comes with "Data Protection for Oracle", which, in its turn, interfaces with Oracle Recovery Manager (RMAN), which uses the SBT interface. So you need to install it even if you don't use Oracle.
    - --sbt-environment: the environment for the third-party manager. This option is not needed when you use OSB, but is almost always necessary for third-party SBT managers. TSM requires the variable TDPO_OPTFILE to be set and to point to the TSM configuration file.
    - --backup-image=sbt:: the path to the image. The prefix "sbt:" indicates that the image should be sent through the SBT interface.

    So the full command in our case would look like:

        ./mysqlbackup --port=3307 --protocol=tcp --user=backup_user --password=foobar \
          --backup-image=sbt:my-first-backup --sbt-lib-path=/usr/lib/libobk.so \
          --sbt-environment="TDPO_OPTFILE=/path/to/my/tdpo.opt" --backup-dir=/path/to/my/dir backup-to-image

    And this command results in the following output log:

        MySQL Enterprise Backup version 3.7.1 [2012/02/16]
        Copyright (c) 2003, 2012, Oracle and/or its affiliates. All Rights Reserved.

        INFO: Starting with following command line ...
         ./mysqlbackup --port=3307 --protocol=tcp --user=backup_user
                --password=foobar --backup-image=sbt:my-first-backup
                --sbt-lib-path=/usr/lib/libobk.so
                --sbt-environment="TDPO_OPTFILE=/path/to/my/tdpo.opt"
                --backup-dir=/path/to/my/dir backup-to-image

        sbt-environment: 'TDPO_OPTFILE=/path/to/my/tdpo.opt'
        INFO: Got some server configuration information from running server.

        IMPORTANT: Please check that mysqlbackup run completes successfully.
                   At the end of a successful 'backup-to-image' run mysqlbackup
                   prints "mysqlbackup completed OK!".

        --------------------------------------------------------------------
                               Server Repository Options:
        --------------------------------------------------------------------
          datadir                          =  /path/to/data
          innodb_data_home_dir             =  /path/to/data
          innodb_data_file_path            =  ibdata1:2048M;ibdata2:2048M;ibdata3:64M:autoextend:max:2048M
          innodb_log_group_home_dir        =  /path/to/data
          innodb_log_files_in_group        =  2
          innodb_log_file_size             =  268435456
        --------------------------------------------------------------------
                               Backup Config Options:
        --------------------------------------------------------------------
          datadir                          =  /path/to/my/dir/datadir
          innodb_data_home_dir             =  /path/to/my/dir/datadir
          innodb_data_file_path            =  ibdata1:2048M;ibdata2:2048M;ibdata3:64M:autoextend:max:2048M
          innodb_log_group_home_dir        =  /path/to/my/dir/datadir
          innodb_log_files_in_group        =  2
          innodb_log_file_size             =  268435456

        Backup Image Path= sbt:my-first-backup
        mysqlbackup: INFO: Unique generated backup id for this is 13297406400663200
        120220 08:54:00 mysqlbackup: INFO: meb_sbt_session_open: MMS is 'Data Protection for Oracle: version 5.5.1.0'
        120220 08:54:00 mysqlbackup: INFO: meb_sbt_session_open: MMS version '5.5.1.0'
        mysqlbackup: INFO: Uses posix_fadvise() for performance optimization.
        mysqlbackup: INFO: System tablespace file format is Antelope.
        mysqlbackup: INFO: Found checkpoint at lsn 31668381.
        mysqlbackup: INFO: Starting log scan from lsn 31668224.
        120220  8:54:00 mysqlbackup: INFO: Copying log...
        120220  8:54:00 mysqlbackup: INFO: Log copied, lsn 31668381.
                  We wait 1 second before starting copying the data files...
        120220  8:54:01 mysqlbackup: INFO: Copying /path/to/ibdata/ibdata1 (Antelope file format).
        mysqlbackup: Progress in MB: 200 400 600 800 1000 1200 1400 1600 1800 2000
        120220  8:55:30 mysqlbackup: INFO: Copying /path/to/ibdata/ibdata2 (Antelope file format).
        mysqlbackup: Progress in MB: 200 400 600 800 1000 1200 1400 1600 1800 2000
        120220  8:57:18 mysqlbackup: INFO: Copying /path/to/ibdata/ibdata3 (Antelope file format).
        mysqlbackup: INFO: Preparing to lock tables: Connected to mysqld server.
        120220 08:57:22 mysqlbackup: INFO: Starting to lock all the tables....
        120220 08:57:22 mysqlbackup: INFO: All tables are locked and flushed to disk
        mysqlbackup: INFO: Opening backup source directory '/path/to/data/'
        120220 08:57:22 mysqlbackup: INFO: Starting to backup all files in subdirectories of '/path/to/data/'
        mysqlbackup: INFO: Backing up the database directory 'mysql'
        mysqlbackup: INFO: Backing up the database directory 'test'
        mysqlbackup: INFO: Copying innodb data and logs during final stage ...
        mysqlbackup: INFO: A copied database page was modified at 31668381.
                  (This is the highest lsn found on page)
                  Scanned log up to lsn 31670396.
                  Was able to parse the log up to lsn 31670396.
                  Maximum page number for a log record 328
        120220 08:57:23 mysqlbackup: INFO: All tables unlocked
        mysqlbackup: INFO: All MySQL tables were locked for 0.000 seconds
        120220 08:59:01 mysqlbackup: INFO: meb_sbt_backup_close: blocks: 4162  size: 1048576  bytes: 4363985063
        120220  8:59:01 mysqlbackup: INFO: Full backup completed!
        mysqlbackup: INFO: MySQL binlog position: filename bin_mysql.001453, position 2105
        mysqlbackup: WARNING: backup-image already closed
        mysqlbackup: INFO: Backup image created successfully.:
                   Image Path: 'sbt:my-first-backup'

        -------------------------------------------------------------
           Parameters Summary
        -------------------------------------------------------------
           Start LSN                  : 31668224
           End LSN                    : 31670396
        -------------------------------------------------------------

        mysqlbackup completed OK!

    Backup successfully completed. To restore it you should use the same commands you use for any other MEB image, but you need to provide the sbt* options as well:

        $ ./mysqlbackup --backup-image=sbt:my-first-backup --sbt-lib-path=/usr/lib/libobk.so \
          --sbt-environment="TDPO_OPTFILE=/path/to/my/tdpo.opt" --backup-dir=/path/to/my/dir image-to-backup-dir

    Then apply the log as usual:

        $ ./mysqlbackup --backup-dir=/path/to/my/dir apply-log

    Then stop mysqld and finally copy-back:

        $ ./mysqlbackup --defaults-file=path/to/my.cnf --backup-dir=/path/to/my/dir copy-back

    Disclaimer: this is only the story of one success, which can be useful for someone else. MEB is not regularly tested with, and is not guaranteed to work with, IBM TSM or any other third-party storage manager.

    Read the article

  • How to turn a folder into a USB drive / mass storage?

    - by FernandoSBS
    I have a plasma TV with USB input (it can play DivX etc.), and what I would like to do is use software to turn a folder on my notebook's hard drive (Windows 7) into a USB mass storage device, so that I can connect the TV to the PC using a USB cable and the TV will recognize the PC folder as a flash drive / USB mass storage. It seems the Mac has something like what I need: http://support.apple.com/kb/ht1661 So there must be something similar for Windows!

    Read the article

  • More Great Improvements to the Windows Azure Management Portal

    - by ScottGu
    Over the last 3 weeks we’ve released a number of enhancements to the new Windows Azure Management Portal. These new capabilities include:

    - Localization support for 6 languages
    - Operation log support
    - Support for SQL Database metrics
    - Virtual Machine enhancements (quick create Windows + Linux VMs)
    - Web Site enhancements (support for creating sites in all regions, private GitHub repo deployment)
    - Cloud Service improvements (deploy from storage account, configuration support of dedicated cache)
    - Media Services enhancements (upload, encode, publish, stream all from within the portal)
    - Virtual Networking usability enhancements
    - Custom CNAME support with Storage Accounts

    All of these improvements are now live in production and available to start using immediately. Below are more details on them.

    Localization Support

    The Windows Azure Portal now supports 6 languages: English, German, Spanish, French, Italian and Japanese. You can easily switch between languages by clicking on the avatar bar in the top right corner of the Portal. Selecting a different language will automatically refresh the UI within the portal in the selected language.

    Operation Log Support

    The Windows Azure Portal now supports the ability for administrators to review the “operation logs” of the services they manage, making it easy to see exactly what management operations were performed on them. You can query for these by selecting the “Settings” tab within the Portal and then choosing the “Operation Logs” tab within it. This displays a filter UI that enables you to query for operations by date and time.

    As of the most recent release we now show logs for all operations performed on Cloud Services and Storage Accounts. You can click on any operation in the list and click the “Details” button in the command bar to retrieve detailed status about it. This now makes it possible to retrieve details about every management operation performed. In future updates you’ll see us extend the operation log capability to apply to all Windows Azure services, which will enable great post-mortem and audit support.

    Support for SQL Database Metrics

    You can now monitor the number of successful connections, failed connections and deadlocks in your SQL databases using the new “Dashboard” view provided on each SQL Database resource. Additionally, if the database is added as a “linked resource” to a Web Site or Cloud Service, monitoring metrics for the linked SQL database are shown along with the Web Site or Cloud Service metrics in the dashboard. This helps with viewing and managing aggregated information across both resources in your application.

    Enhancements to Virtual Machines

    The most recent Windows Azure Portal release brings with it some nice usability improvements to Virtual Machines, starting with an integrated Quick Create experience for Windows and Linux VMs. Creating a new Windows or Linux VM is now easy using the new “Quick Create” experience in the Portal, and in addition to Windows VM templates you can also now select Linux image templates in the quick create UI. This makes it incredibly easy to create a new Virtual Machine in only a few seconds.

    Enhancements to Web Sites

    Prior to this past month’s release, users were forced to choose a single geographical region when creating their first site; after that, subsequent sites could only be created in that same region. This restriction has now been removed, and you can now create sites in any region at any time and have up to 10 free sites in each supported region. One of the new regions we’ve recently opened up is the “East Asia” region, which allows you to deploy sites to North America, Europe and Asia simultaneously.

    This past week we also enabled Git-based continuous deployment support for Web Sites from private GitHub and BitBucket repositories (previously you could only enable this with public repositories).

    Enhancements to the Cloud Services Experience

    The Windows Azure Portal now supports deploying an application package and configuration file stored in a blob container in Windows Azure Storage. The ability to upload an application package from storage is available when you custom create, upload to, or update a cloud service deployment. To upload an application package and configuration, create a Cloud Service, then select the file upload dialog and choose to upload from a Windows Azure Storage Account: click the “FROM STORAGE” button and select the application package and configuration file to use from the new blob storage explorer in the portal.

    If you have deployed the new dedicated cache within a cloud service role, you can also now configure the cache settings in the portal by navigating to the configuration tab of your Cloud Service deployment. The configuration experience is similar to the one in Visual Studio when you create a cloud service and add a caching role. The portal now allows you to add or remove named caches and change the settings for the named caches, all from within the Portal and without needing to redeploy your application.

    Enhancements to Media Services

    You can now upload, encode, publish, and play your video content directly from within the Windows Azure Portal. This makes it incredibly easy to get started with Windows Azure Media Services and perform common tasks without having to write any code. Simply navigate to your media service and then click on the “Content” tab; all of the media content within your media service account will be listed there. Clicking the “Upload” button within the portal allows you to upload a media file directly from your computer, which causes the video file you chose from your local file-system to be uploaded into Windows Azure. Once uploaded, you can select the file within the content tab of the Portal and click the “Encode” button to transcode it into different streaming formats; the portal includes a number of pre-set encoding formats that you can easily convert media content into. Once you select an encoding and click the OK button, Windows Azure Media Services will kick off an encoding job that happens in the cloud (no need for you to stand up or configure a custom encoding server). When it’s finished, you can select the video in the “Content” tab and then click “Publish” in the command bar to set up an origin streaming endpoint for it. Once the media file is published you can point apps at the public URL and play the content using Windows Azure Media Services, with no need to set up or run your own streaming server. You can also now select the file and click the “Play” button in the command bar to play it using the streaming endpoint directly within the Portal. This makes it incredibly easy to try out and use Windows Azure Media Services and test an end-to-end workflow without having to write any code. Once you test things out you can of course automate it using script or code, providing you with an incredibly powerful cloud media platform that you can use.

    Enhancements to the Virtual Network Experience

    Over the last few months, we have received feedback on the complexity of the Virtual Network creation experience. With these most recent Portal updates, we have added a Quick Create experience that makes the creation experience very simple. All that an administrator now needs to do is provide a VNET name, choose an address space and the size of the VNET address space. They no longer need to understand the intricacies of the CIDR format or walk through a 4-page wizard to create a VNET/subnet. This makes creating virtual networks really simple. The portal also now has a “Register DNS Server” task that makes it easy to register DNS servers and associate them with a virtual network.

    Enhancements to the Storage Experience

    The portal now lets you register custom domain names for your Windows Azure Storage Accounts. To enable this, select a storage resource, go to the CONFIGURE tab for the storage account, and then click MANAGE DOMAIN on the command bar. This brings up a dialog that allows you to register any CNAME you want.

    Summary

    The above features are all now live in production and available to use immediately. If you don’t already have a Windows Azure account, you can sign up for a free trial and start using them today. Visit the Windows Azure Developer Center to learn more about how to build apps with it.

    One of the other cool features that is now live within the portal is our new Windows Azure Store, which makes it incredibly easy to try and purchase developer services from a variety of partners. It is an incredibly awesome new capability, and something I’ll be doing a dedicated post about shortly.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • How to grant standard users access to disk partitions and flash storage?

    - by JK04
    I have a partition that I need standard users (not administrators) to have read/write access to. However, this partition does not even appear for them the way it does for me as an administrator. How can I make it so that standard users can read and write to this partition? It would be nice if they also had the ability to mount it if needed. I have the same problem with removable media: if I have a flash card in the computer, standard users cannot see this storage media.
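    Not from the question, but a sketch of one common approach on Linux, assuming the partition is /dev/sda5 with an ext4 filesystem (the device, mount point, and options here are illustrative):

        # /etc/fstab -- the "user" option lets non-administrators mount and
        # unmount the partition at a fixed, visible mount point
        /dev/sda5  /media/shared  ext4  defaults,user,rw  0  2

        # for ext-family filesystems, the mounted directory itself also needs
        # permissive permissions, e.g.:
        #   sudo chmod 0777 /media/shared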

    Read the article

  • Does the direction of storage make us bad data citizens?

    - by simonsabin
    My career started at a company where we hardly had email, the network was a 10base2 affair with cables running all around the office, you used floppy disks, and the thought of a GB of data was absurd. You had to look after every byte and only keep what you really needed. Whilst the cost of spinning disks gradually falls, the cost and physical size of flash storage continue to plummet. The new Crucial SSD is £380 for 1TB, and I can now keep 128GB of data on an SD card the size of my finger. It only costs...(read more)

    Read the article

  • Bacula Windows client could not connect to Bacula director

    - by pr0f-r00t
    I have a Bacula server on my Linux Debian squeeze host (Bacula version 5.0.2) and a Bacula client on Windows XP SP3. On my network each machine can see the others, share files, and ping. On my local server I can run bconsole and the server responds, but when I run bconsole or bat on my Windows client the server does not respond. Here are my configuration files:

    bacula-dir.conf:

        #
        # Default Bacula Director Configuration file
        #
        # The only thing that MUST be changed is to add one or more
        # file or directory names in the Include directive of the
        # FileSet resource.
        #
        # For Bacula release 5.0.2 (28 April 2010) -- debian squeeze/sid
        #
        # You might also want to change the default email address
        # from root to your address. See the "mail" and "operator"
        # directives in the Messages resource.
        #
        Director {                            # define myself
          Name = nima-desktop-dir
          DIRport = 9101                      # where we listen for UA connections
          QueryFile = "/etc/bacula/scripts/query.sql"
          WorkingDirectory = "/var/lib/bacula"
          PidDirectory = "/var/run/bacula"
          Maximum Concurrent Jobs = 1
          Password = "Cv70F6pf1t6pBopT4vQOnigDrR0v3L"       # Console password
          Messages = Daemon
          DirAddress = 127.0.0.1
        # DirAddress = 72.16.208.1
        }

        JobDefs {
          Name = "DefaultJob"
          Type = Backup
          Level = Incremental
          Client = nima-desktop-fd
          FileSet = "Full Set"
          Schedule = "WeeklyCycle"
          Storage = File
          Messages = Standard
          Pool = File
          Priority = 10
          Write Bootstrap = "/var/lib/bacula/%c.bsr"
        }

        #
        # Define the main nightly save backup job
        # By default, this job will back up to disk in /nonexistant/path/to/file/archive/dir
        Job {
          Name = "BackupClient1"
          JobDefs = "DefaultJob"
        }
        #Job {
        #  Name = "BackupClient2"
        #  Client = nima-desktop2-fd
        #  JobDefs = "DefaultJob"
        #}

        # Backup the catalog database (after the nightly save)
        Job {
          Name = "BackupCatalog"
          JobDefs = "DefaultJob"
          Level = Full
          FileSet = "Catalog"
          Schedule = "WeeklyCycleAfterBackup"
          # This creates an ASCII copy of the catalog
          # Arguments to make_catalog_backup.pl are:
          #   make_catalog_backup.pl <catalog-name>
          RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog"
          # This deletes the copy of the catalog
          RunAfterJob = "/etc/bacula/scripts/delete_catalog_backup"
          Write Bootstrap = "/var/lib/bacula/%n.bsr"
          Priority = 11                       # run after main backup
        }

        #
        # Standard Restore template, to be changed by Console program
        # Only one such job is needed for all Jobs/Clients/Storage ...
        #
        Job {
          Name = "RestoreFiles"
          Type = Restore
          Client = nima-desktop-fd
          FileSet = "Full Set"
          Storage = File
          Pool = Default
          Messages = Standard
          Where = /nonexistant/path/to/file/archive/dir/bacula-restores
        }

        # job for vmware windows host
        Job {
          Name = "nimaxp-fd"
          Type = Backup
          Client = nimaxp-fd
          FileSet = "nimaxp-fs"
          Schedule = "WeeklyCycle"
          Storage = File
          Messages = Standard
          Pool = Default
          Write Bootstrap = "/var/bacula/working/rsys-win-www-1-fd.bsr"  #Change this
        }

        # job for vmware windows host
        Job {
          Name = "arg-michael-fd"
          Type = Backup
          Client = nimaxp-fd
          FileSet = "arg-michael-fs"
          Schedule = "WeeklyCycle"
          Storage = File
          Messages = Standard
          Pool = Default
          Write Bootstrap = "/var/bacula/working/rsys-win-www-1-fd.bsr"  #Change this
        }

        # List of files to be backed up
        FileSet {
          Name = "Full Set"
          Include {
            Options {
              signature = MD5
            }
        #
        #  Put your list of files here, preceded by 'File =', one per line
        #    or include an external list with:
        #
        #    File = <file-name
        #
        #  Note: / backs up everything on the root partition.
        #    if you have other partitions such as /usr or /home
        #    you will probably want to add them too.
        #
        #  By default this is defined to point to the Bacula binary
        #    directory to give a reasonable FileSet to backup to
        #    disk storage during initial testing.
        #
            File = /usr/sbin
          }

        #
        # If you backup the root directory, the following two excluded
        #   files can be useful
        #
          Exclude {
            File = /var/lib/bacula
            File = /nonexistant/path/to/file/archive/dir
            File = /proc
            File = /tmp
            File = /.journal
            File = /.fsck
          }
        }

        # List of files to be backed up
        FileSet {
          Name = "nimaxp-fs"
          Enable VSS = yes
          Include {
            Options {
              signature = MD5
            }
            File = "C:\softwares"
            File = C:/softwares
            File = "C:/softwares"
          }
        }

        # List of files to be backed up
        FileSet {
          Name = "arg-michael-fs"
          Enable VSS = yes
          Include {
            Options {
              signature = MD5
            }
            File = "C:\softwares"
            File = C:/softwares
            File = "C:/softwares"
          }
        }

        #
        # When to do the backups, full backup on first sunday of the month,
        #  differential (i.e. incremental since full) every other sunday,
        #  and incremental backups other days
        Schedule {
          Name = "WeeklyCycle"
          Run = Full 1st sun at 23:05
          Run = Differential 2nd-5th sun at 23:05
          Run = Incremental mon-sat at 23:05
        }

        # This schedule does the catalog. It starts after the WeeklyCycle
        Schedule {
          Name = "WeeklyCycleAfterBackup"
          Run = Full sun-sat at 23:10
        }

        # This is the backup of the catalog
        FileSet {
          Name = "Catalog"
          Include {
            Options {
              signature = MD5
            }
            File = "/var/lib/bacula/bacula.sql"
          }
        }

        # Client (File Services) to backup
        Client {
          Name = nima-desktop-fd
          Address = localhost
          FDPort = 9102
          Catalog = MyCatalog
          Password = "_MOfxEuRzxijc0DIMcBqtyx9iW1tzE7V6"      # password for FileDaemon
          File Retention = 30 days            # 30 days
          Job Retention = 6 months            # six months
          AutoPrune = yes                     # Prune expired Jobs/Files
        }

        # Client file service for vmware windows host
        Client {
          Name = nimaxp-fd
          Address = nimaxp
          FDPort = 9102
          Catalog = MyCatalog
          Password = "Ku8F1YAhDz5EMUQjiC9CcSw95Aho9XbXailUmjOaAXJP"  # password for FileDaemon
          File Retention = 30 days            # 30 days
          Job Retention = 6 months            # six months
          AutoPrune = yes                     # Prune expired Jobs/Files
        }

        # Client file service for vmware windows host
        Client {
          Name = arg-michael-fd
          Address = 192.168.0.61
          FDPort = 9102
          Catalog = MyCatalog
          Password = "b4E9FU6s/9Zm4BVFFnbXVKhlyd/zWxj0oWITKK6CALR/"  # password for FileDaemon
          File Retention = 30 days            # 30 days
          Job Retention = 6 months            # six months
          AutoPrune = yes                     # Prune expired Jobs/Files
        }

        #
        # Second Client (File Services) to backup
        #  You should change Name, Address, and Password before using
        #
        #Client {
        #  Name = nima-desktop2-fd
        #  Address = localhost2
        #  FDPort = 9102
        #  Catalog = MyCatalog
        #  Password = "_MOfxEuRzxijc0DIMcBqtyx9iW1tzE7V62"    # password for FileDaemon 2
        #  File Retention = 30 days            # 30 days
        #  Job Retention = 6 months            # six months
        #  AutoPrune = yes                     # Prune expired Jobs/Files
        #}

        # Definition of file storage device
        Storage {
          Name = File
        # Do not use "localhost" here
          Address = localhost                 # N.B. Use a fully qualified name here
          SDPort = 9103
          Password = "Cj-gtxugC4dAymY01VTSlUgMTT5LFMHf9"
          Device = FileStorage
          Media Type = File
        }

        # Definition of DDS tape storage device
        #Storage {
        #  Name = DDS-4
        ## Do not use "localhost" here
        #  Address = localhost                # N.B. Use a fully qualified name here
        #  SDPort = 9103
        #  Password = "Cj-gtxugC4dAymY01VTSlUgMTT5LFMHf9"     # password for Storage daemon
        #  Device = DDS-4                      # must be same as Device in Storage daemon
        #  Media Type = DDS-4                  # must be same as MediaType in Storage daemon
        #  Autochanger = yes                   # enable for autochanger device
        #}

        # Definition of 8mm tape storage device
        #Storage {
        #  Name = "8mmDrive"
        ## Do not use "localhost" here
        #  Address = localhost                # N.B. Use a fully qualified name here
        #  SDPort = 9103
        #  Password = "Cj-gtxugC4dAymY01VTSlUgMTT5LFMHf9"
        #  Device = "Exabyte 8mm"
        #  MediaType = "8mm"
        #}

        # Definition of DVD storage device
        #Storage {
        #  Name = "DVD"
        ## Do not use "localhost" here
        #  Address = localhost                # N.B. Use a fully qualified name here
        #  SDPort = 9103
        #  Password = "Cj-gtxugC4dAymY01VTSlUgMTT5LFMHf9"
        #  Device = "DVD Writer"
        #  MediaType = "DVD"
        #}

        # Generic catalog service
        Catalog {
          Name = MyCatalog
        # Uncomment the following line if you want the dbi driver
        # dbdriver = "dbi:sqlite3"; dbaddress = 127.0.0.1; dbport =
          dbname = "bacula"; dbuser = ""; dbpassword = ""
        }

        # Reasonable message delivery -- send most everything to email address
        #  and to the console
        Messages {
          Name = Standard
        #
        # NOTE! If you send to two email or more email addresses, you will need
        #  to replace the %r in the from field (-f part) with a single valid
        #  email address in both the mailcommand and the operatorcommand.
        #  What this does is, it sets the email address that emails would display
        #  in the FROM field, which is by default the same email as they're being
        #  sent to. However, if you send email to more than one address, then
        #  you'll have to set the FROM address manually, to a single address.
        #  for example, a '[email protected]', is better since that tends to
        #  tell (most) people that its coming from an automated source.
        #
          mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: %t %e of %c %l\" %r"
          operatorcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: Intervention needed for %j\" %r"
          mail = root@localhost = all, !skipped
          operator = root@localhost = mount
          console = all, !skipped, !saved
        #
        # WARNING! the following will create a file that you must cycle from
        #  time to time as it will grow indefinitely. However, it will
        #  also keep all your messages if they scroll off the console.
        #
          append = "/var/lib/bacula/log" = all, !skipped
          catalog = all
        }

        #
        # Message delivery for daemon messages (no job).
        Messages {
          Name = Daemon
          mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula daemon message\" %r"
          mail = root@localhost = all, !skipped
          console = all, !skipped, !saved
          append = "/var/lib/bacula/log" = all, !skipped
        }

        # Default pool definition
        Pool {
          Name = Default
          Pool Type = Backup
          Recycle = yes                       # Bacula can automatically recycle Volumes
          AutoPrune = yes                     # Prune expired volumes
          Volume Retention = 365 days         # one year
        }

        # File Pool definition
        Pool {
          Name = File
          Pool Type = Backup
          Recycle = yes                       # Bacula can automatically recycle Volumes
          AutoPrune = yes                     # Prune expired volumes
          Volume Retention = 365 days         # one year
          Maximum Volume Bytes = 50G          # Limit Volume size to something reasonable
          Maximum Volumes = 100               # Limit number of Volumes in Pool
        }

        # Scratch pool definition
        Pool {
          Name = Scratch
          Pool Type = Backup
        }

        #
        # Restricted console used by tray-monitor to get the status of the director
        #
        Console {
          Name = nima-desktop-mon
          Password = "-T0h6HCXWYNy0wWqOomysMvRGflQ_TA6c"
          CommandACL = status, .status
        }

    bacula-fd.conf on the client:

        #
        # Default Bacula File Daemon Configuration file
        #
        # For Bacula release 5.0.3 (08/05/10) -- Windows MinGW32
        #
        # There is not much to change here except perhaps the
        # File daemon Name
        #

        #
        # "Global" File daemon configuration specifications
        #
        FileDaemon {                          # this is me
          Name = nimaxp-fd
          FDport = 9102                       # where we listen for the director
          WorkingDirectory = "C:\\Program Files\\Bacula\\working"
          Pid Directory = "C:\\Program Files\\Bacula\\working"
        # Plugin Directory = "C:\\Program Files\\Bacula\\plugins"
          Maximum Concurrent Jobs = 10
        }

        #
        # List Directors who are permitted to contact this File daemon
        #
        Director {
          Name = Nima-desktop-dir
          Password = "Cv70F6pf1t6pBopT4vQOnigDrR0v3L"
        }

        #
        # Restricted Director, used by tray-monitor to get the
        #   status of the file daemon
        #
        Director {
          Name = nimaxp-mon
          Password = "q5b5g+LkzDXorMViFwOn1/TUnjUyDlg+gRTBp236GrU3"
          Monitor = yes
        }

        # Send all messages except skipped files back to Director
        Messages {
          Name = Standard
          director = Nima-desktop = all, !skipped, !restored
        }

    I have checked my firewall and even disabled it, but it doesn't work.

    Read the article

  • Blob container creation exception ...

    - by Egon
    I get an exception every time I try to create a container for the blob using the following code:

        blobStorageType = storageAccInfo.CreateCloudBlobClient();
        ContBlob = blobStorageType.GetContainerReference(containerName);
        // everything fine till here; the next line throws the exception
        ContBlob.CreateIfNotExist();

    The exception:

        Microsoft.WindowsAzure.StorageClient.StorageClientException was unhandled
          Message="One of the request inputs is out of range."
          Source="Microsoft.WindowsAzure.StorageClient"
          StackTrace:
            at Microsoft.WindowsAzure.StorageClient.Tasks.Task`1.get_Result()
            at Microsoft.WindowsAzure.StorageClient.Tasks.Task`1.ExecuteAndWait()
            at Microsoft.WindowsAzure.StorageClient.TaskImplHelper.ExecuteImplWithRetry[T](Func`2 impl, RetryPolicy policy)
            at Microsoft.WindowsAzure.StorageClient.CloudBlobContainer.CreateIfNotExist(BlobRequestOptions options)
            at Microsoft.WindowsAzure.StorageClient.CloudBlobContainer.CreateIfNotExist()
            at WebRole1.BlobFun..ctor() in C:\Users\cloud\Documents\Visual Studio 2008\Projects\CloudBlob\WebRole1\BlobFun.cs:line 58
            at WebRole1.BlobFun.calling1() in C:\Users\cloud\Documents\Visual Studio 2008\Projects\CloudBlob\WebRole1\BlobFun.cs:line 29
            at AzureBlobTester.Program.Main(String[] args) in C:\Users\cloud\Documents\Visual Studio 2008\Projects\CloudBlob\AzureBlobTester\Program.cs:line 19
            at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args)
            at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)
            at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
            at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
            at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
            at System.Threading.ThreadHelper.ThreadStart()
          InnerException: System.Net.WebException
            Message="The remote server returned an error: (400) Bad Request."
            Source="System"
            StackTrace:
              at System.Net.HttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
              at Microsoft.WindowsAzure.StorageClient.EventHelper.ProcessWebResponse(WebRequest req, IAsyncResult asyncResult, EventHandler`1 handler, Object sender)
            InnerException:

    Do you guys know what it is that I am doing wrong?
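    One common cause of this particular error, offered here as a guess rather than a confirmed diagnosis: blob container names must be 3 to 63 characters of lowercase letters, numbers, and dashes, starting with a letter or number. A containerName like "MyContainer" fails with exactly this "out of range" message, so a guard like the following can be worth trying:

        // Hypothetical guard (not from the original post): normalize the name
        // before creating the container, since invalid names produce
        // "One of the request inputs is out of range." (HTTP 400).
        string safeName = containerName.ToLowerInvariant();
        ContBlob = blobStorageType.GetContainerReference(safeName);
        ContBlob.CreateIfNotExist();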

    Read the article

  • Alternative to Amazon’s S3 service?

    - by Cory
    Just wondering if there is a good alternative to Amazon's S3 service? I like S3 but the bandwidth cost is high. I looked at Cloud Files from Rackspace but the cost is even higher. I don't mind prepaying or having a monthly payment in order to reduce the bandwidth cost greatly. Thank you for any help.

    Read the article

  • Accessing a blob without using a webrole?

    - by Egon
    I wanted to know if there is a way we can upload/download a blob, or add, remove, and view metadata, without using a webrole. If my application has a lot of GUI, should there be multiple webroles? Everywhere I look, the webrole's default.aspx.cs file does everything with the blob based on an event, which is perfectly fine, but what if my GUI is more complicated?
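    The storage client does not require a web role host; any process that can reach the storage endpoint can work with blobs. A minimal sketch, assuming the v1.x Microsoft.WindowsAzure.StorageClient library used by the other questions on this page (the class, container, and blob names are invented for illustration):

        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        public class BlobWithoutWebRole
        {
            public static void Run(string connectionString)
            {
                // works the same from a worker role, console app, or GUI client
                CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
                CloudBlobClient client = account.CreateCloudBlobClient();

                CloudBlobContainer container = client.GetContainerReference("demo");
                container.CreateIfNotExist();

                CloudBlob blob = container.GetBlobReference("example.txt");
                blob.UploadText("hello");                // upload

                blob.Metadata["category"] = "sample";    // add metadata
                blob.SetMetadata();

                blob.FetchAttributes();                  // view metadata
                string category = blob.Metadata["category"];

                string contents = blob.DownloadText();   // download
            }
        }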

    Read the article

  • No, iCloud Isn’t Backing Them All Up: How to Manage Photos on Your iPhone or iPad

    - by Chris Hoffman
    Are the photos you take with your iPhone or iPad backed up in case you lose your device? If you’re just relying on iCloud to manage your important memories, your photos may not be backed up at all. Apple’s iCloud has a photo-syncing feature in the form of “Photo Stream,” but Photo Stream doesn’t actually perform any long-term backups of your photos.

    iCloud’s Photo Backup Limitations

    Assuming you’ve set up iCloud on your iPhone or iPad, your device is using a feature called “Photo Stream” to automatically upload the photos you take to your iCloud storage and sync them across your devices. Unfortunately, there are some big limitations here.

    - 1000 Photos: Photo Stream only backs up the latest 1000 photos. Do you have 1500 photos in your Camera Roll folder on your phone? If so, only the latest 1000 photos are stored in your iCloud account online. If you don’t have those photos backed up elsewhere, you’ll lose them when you lose your phone. If you have 1000 photos and take one more, the oldest photo will be removed from your iCloud Photo Stream.
    - 30 Days: Apple also states that photos in your Photo Stream will be automatically deleted after 30 days “to give your devices plenty of time to connect and download them.” Some people report photos aren’t deleted after 30 days, but it’s clear you shouldn’t rely on iCloud for more than 30 days of storage.
    - iCloud Storage Limits: Apple only gives you 5 GB of iCloud storage space for free, and this is shared between backups, documents, and all other iCloud data. This 5 GB can fill up pretty quickly. If your iCloud storage is full and you haven’t purchased any more storage from Apple, your photos aren’t being backed up.
    - Videos Aren’t Included: Photo Stream doesn’t include videos, so any videos you take aren’t automatically backed up.

    It’s clear that iCloud’s Photo Stream isn’t designed as a long-term way to store your photos, just a convenient way to access recent photos on all your devices before you back them up for real.

    iCloud’s Photo Stream is Designed for Desktop Backups

    If you have a Mac, you can launch iPhoto and enable the Automatic Import option under Photo Stream in its preferences pane. Assuming your Mac is on and connected to the Internet, iPhoto will automatically download photos from your photo stream and make local backups of them on your hard drive. You’ll then have to back up your photos manually so you don’t lose them if your Mac’s hard drive ever fails.

    If you have a Windows PC, you can install the iCloud Control Panel, which will create a Photo Stream folder on your PC. Your photos will be automatically downloaded to this folder and stored in it. You’ll want to back up your photos so you don’t lose them if your PC’s hard drive ever fails.

    Photo Stream is clearly designed to be used along with a desktop application. Photo Stream temporarily backs up your photos to iCloud so iPhoto or iCloud Control Panel can download them to your Mac or PC and make a local backup before they’re deleted. You could also use iTunes to sync your photos from your device to your PC or Mac, but we don’t really recommend it — you should never have to use iTunes.

    How to Actually Back Up All Your Photos Online

    So Photo Stream is actually pretty inconvenient — or, at least, it’s just a way to temporarily sync photos between your devices without storing them long-term. But what if you actually want to automatically back up your photos online without them being deleted automatically? The solution here is a third-party app that does this for you, offering automatic photo uploads with long-term storage. There are several good services with apps in the App Store:

    - Dropbox: Dropbox’s Camera Upload feature allows you to automatically upload the photos — and videos — you take to your Dropbox account. They’ll be easily accessible anywhere there’s a Dropbox app, and you can get much more free Dropbox storage than you can iCloud storage. Dropbox will never automatically delete your old photos.
    - Google+: Google+ offers photo and video backups with its Auto Upload feature, too. Photos will be stored in your Google+ Photos — formerly Picasa Web Albums — and will be marked as private by default so no one else can view them. Full-size photos will count against your free 15 GB of Google account storage space, but you can also choose to upload an unlimited amount of photos at a smaller resolution.
    - Flickr: The Flickr app is no longer a mess. Flickr offers an Auto Upload feature for uploading full-size photos you take, and free Flickr accounts offer a massive 1 TB of storage for your photos. The massive amount of free storage alone makes Flickr worth a look.

    Use any of these services and you’ll get an online, automatic photo backup solution you can rely on. You’ll get a good chunk of free space, your photos will never be automatically deleted, and you can easily access them from any device. You won’t have to worry about storing local copies of your photos and backing them up manually. Apple should fix this mess and offer a better solution for long-term photo backup, especially considering the limitations aren’t immediately obvious to users. Until they do, third-party apps are ready to step in and take their place. You can also automatically back up your photos to the web on Android with Google+’s Auto Upload or Dropbox’s Camera Upload.

    Image Credit: Simon Yeo on Flickr

    Read the article

  • Different Azure blob streams when using .Net client vs. REST interface

    - by knightpfhor
    I have encountered an unusual difference in the way that the .NET client for Azure and the direct REST API bring back streams of binary data. If I use CloudBlob.DownloadToStream() vs. getting the response stream from the HTTP response, I get streams with the same length but different content. Specifically, the REST response seems to zero out a series of bytes. I discovered this issue because I'm trying to use the byte-range feature for blobs, which is currently not supported in the .NET client (if I'm wrong on this point and someone can point at where I can do this, it might make the rest of this question irrelevant).

    If I upload a binary representation of the first 2k Unicode characters with this code:

        Public Sub WriteFoo()
            Dim Blob As CloudBlob
            Dim Stream1 As MemoryStream
            Dim Container As CloudBlobContainer
            Dim Builder As StringBuilder
            Dim NextCharacter As String
            Dim Formatter As BinaryFormatter

            Container = CloudStorageAccount.DevelopmentStorageAccount.CreateCloudBlobClient.GetContainerReference("testcontainer")
            Container.CreateIfNotExist()
            Blob = Container.GetBlobReference("Foo")
            Stream1 = New MemoryStream()
            Builder = New Text.StringBuilder()

            For Index As Integer = 1 To 2000
                Select Case Index
                    Case Is <= 9
                        NextCharacter = ChrW(9)
                    Case Is <= 31
                        NextCharacter = Environment.NewLine
                    Case 127
                        NextCharacter = Environment.NewLine
                    Case Else
                        NextCharacter = ChrW(Index)
                End Select
                Builder.Append(NextCharacter)
            Next

            Formatter = New BinaryFormatter()
            Formatter.Serialize(Stream1, Builder.ToString())
            Stream1.Position = 0
            Blob.UploadFromStream(Stream1)
        End Sub

    Then try to access it with the following code:

        Public Sub ReadFoo()
            Dim Blob As CloudBlob
            Dim Request As System.Net.HttpWebRequest
            Dim Response As System.Net.WebResponse
            Dim ResponseSize As Integer
            Dim ResponseBuffer As Byte()
            Dim ResponseStream As Stream
            Dim Stream1 As MemoryStream
            Dim Stream2 As MemoryStream
            Dim Container As CloudBlobContainer
            Dim Byte1 As Integer
            Dim Byte2 As Integer

            Container = CloudStorageAccount.DevelopmentStorageAccount.CreateCloudBlobClient.GetContainerReference("testcontainer")
            Container.CreateIfNotExist()
            Blob = Container.GetBlobReference("Foo")
            Stream1 = New MemoryStream()
            Stream2 = New MemoryStream()
            Blob.DownloadToStream(Stream1)

            Request = DirectCast(System.Net.WebRequest.Create(Blob.Uri), System.Net.HttpWebRequest)
            Request.Headers.Add("x-ms-version", "2009-09-19")
            Request.Headers.Add("x-ms-range", String.Format("bytes={0}-{1}", 0, Integer.MaxValue))
            Blob.Container.ServiceClient.Credentials.SignRequest(Request)
            Response = Request.GetResponse()
            ResponseStream = Response.GetResponseStream()
            ResponseSize = CInt(Response.ContentLength)
            ReDim ResponseBuffer(ResponseSize - 1)
            ResponseStream.Read(ResponseBuffer, 0, ResponseSize)
            Stream2.Write(ResponseBuffer, 0, ResponseSize)

            Stream1.Position = 0
            Stream2.Position = 0
            If Stream1.Length <> Stream2.Length Then
                System.Diagnostics.Debug.WriteLine(String.Format("Streams a different length. 1: {0}. 2: {1}", Stream1.Length, Stream2.Length))
            Else
                While Stream1.Position < Stream1.Length
                    Byte1 = Stream1.ReadByte()
                    Byte2 = Stream2.ReadByte()
                    If Byte1 <> Byte2 Then
                        System.Diagnostics.Debug.WriteLine(String.Format("Streams differ at position {0}, 1: {1}. 2: {2}", Stream1.Position - 1, Byte1, Byte2))
                    End If
                End While
            End If
        End Sub

    Past a certain point, all of the data in Stream2 (the data I've retrieved from the REST API) ends up being 0. To make matters even more confusing, when I reverse the order that I put the characters in the string, e.g. For Index As Integer = 2000 To 1 rather than For Index As Integer = 1 To 2000, it all works OK. Any help is much appreciated. My computer is sick of me swearing at it.
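    One thing worth checking that is not in the original code, offered as a guess at the cause rather than a confirmed fix: Stream.Read may return fewer bytes than requested, so a single Read call can leave the tail of ResponseBuffer as zeroes, which matches the symptom described. A loop that drains the response would rule this out:

        ' Sketch: keep reading until the buffer is full or the stream ends
        Dim TotalRead As Integer = 0
        Dim BytesRead As Integer
        Do
            BytesRead = ResponseStream.Read(ResponseBuffer, TotalRead, ResponseSize - TotalRead)
            TotalRead += BytesRead
        Loop While BytesRead > 0 AndAlso TotalRead < ResponseSize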

    Read the article

  • Objective-C classes, pointers to primitive types, etc.

    - by Toby Wilson
    I'll cut a really long story short and give an example of my problem. Given a class that has a pointer to a primitive type as a property:

        @interface ClassOne : NSObject {
            int* aNumber;
        }
        @property int* aNumber;

    The class is instantiated, and aNumber is allocated and assigned a value, accordingly:

        ClassOne* bob = [[ClassOne alloc] init];
        bob.aNumber = malloc(sizeof(int));
        *bob.aNumber = 5;

    It is then passed, by reference, to assign the aNumber value of a separate instance of this type of class, accordingly:

        ClassOne* fred = [[ClassOne alloc] init];
        fred.aNumber = bob.aNumber;

    Fred's aNumber pointer is then freed, reallocated, and assigned a new value, for example 7. Now, the problem I'm having: since Fred has been assigned the same pointer that Bob had, I would expect that Bob's aNumber will now have a value of 7. It doesn't, because for some reason its pointer was freed, but not reassigned (it is still pointing to the same address it was first allocated, which is now freed). Fred's pointer, however, has the allocated value 7 in a different memory location. Why is it behaving like this? What am I misunderstanding? How can I make it work like C++ does?
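    For comparison, here is the same scenario in plain C (an illustration added to this page, not from the original question). Pointer assignment copies the address value, so reassigning one variable never updates the other:

        #include <stdlib.h>

        void example(void) {
            int *a = malloc(sizeof(int));
            *a = 5;
            int *b = a;              /* b gets a copy of the address stored in a */
            free(b);                 /* the block both a and b pointed at is gone */
            b = malloc(sizeof(int)); /* b now holds a different address */
            *b = 7;                  /* a still holds the old, freed address; *a is undefined */
        }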

    Read the article

  • Book Review - Programming Windows Azure by Sriram Krishnan

    - by BuckWoody
    As part of my professional development, I've created a list of books to read throughout the year, starting in June of 2011. This is a review of the first one, called Programming Windows Azure by Sriram Krishnan. You can find my entire list of books I'm reading for my career here: http://blogs.msdn.com/b/buckwoody/archive/2011/06/07/head-in-the-clouds-eyes-on-the-books.aspx  Why I Chose This Book: As part of my learning style, I try to read multiple books about a single subject. I've found that at least 3 books are necessary to get the right amount of information to me. This is a "technical" work, meaning that it deals with technology and not business, writing or other facets of my career. I'll have a mix of all of those as I read along. I chose this work in addition to others I've read since it covers everything from an introduction to more advanced topics in a single book. It also has some practical examples of actually working with the product, particularly on storage. Although it's dated, many examples normally translate. I also saw that it had pretty good reviews. What I learned: I learned a great deal about storage, and many useful code snippets. I do think that there could have been more of a focus on the application fabric - but of course that wasn't as mature a feature when this book was written. I learned some great architecture examples, and in one section I learned more about encryption. In that example, however, I would rather have seen the examples go the other way - the book focused on moving data from on-premise to Azure storage in an encrypted fashion. Using the Application Fabric I would rather see sensitive data left in a hybrid fashion on premise, and connected to from the Azure application. Even so, the examples were very useful. If you're looking for a good "starter" Azure book, this is a good choice. I also recommend the last chapter as a quick read for a DBA, or Database Administrator. It's not very long, but useful. Note that the limits described are incorrect - which is one of the dangers of reading a book about any cloud offering. The services offered are updated so quickly that the information is in constant danger of being "stale". Even so, I found this a useful book, which I believe will help me work with Azure better. Raw Notes: I take notes as I read, calling that process "reading with a pencil". I find that when I do that I pay attention better, and record some things that I need to know later. I'll take these notes, categorize them into a OneNote notebook that I synchronize in my Live.com account, and that way I can search them from anywhere. I can even read them on the web, since Live.com has a OneNote program built in. Note that these are the raw notes, so they might not make a lot of sense out of context - I include them here so you can watch my thought process. Programming Windows Azure by Sriram Krishnan: Learning about how to select applications suitable for Distributed Technology. Application Fabric gets the least attention; probably because it was newer at the time. Very clear (Chapter One) Good foundation Background and history, but not too much I normally arrange my descriptions differently, starting with the use-cases and moving to physicality, but this difference helps me. Interesting that I am reading this using Safari Books Online, which uses many of these concepts. 
Taught me some new aspects of a Hypervisor – very low-level information about the Azure Fabric (not to be confused with the Application Fabric feature) (Chapter Two) Good detail of what is included in the SDK. Even more is available now. CS = Cloud Service (Chapter 3) Place Storage info in the configuration file, since it can be streamed in-line with a running app. Ditto for logging, and keep separated configs for staging and testing. Easy-switch in and switch out.  (Chapter 4) There are two Runtime API’s, one of external and one for internal. Realizing how powerful this paradigm really is. Some places seem light, and to drop off but perhaps that’s best. Managing API is not charged, which is nice. I don’t often think about the price, until it comes to an actual deployment (Chapter 5) Csmanage is something I want to dig into deeper. API requires package moves to Blob storage first, so it needs a URL. Csmanage equivalent can be written in Unix scripting using openssl. Upgrades are possible, and you use the upgradeDomainCount attribute in the Service-Definition.csdef file  Always use a low-privileged account to test on the dev fabric, since Windows Azure runs in partial trust. Full trust is available, but can be dangerous and must be well-thought out. (Chapter 6) Learned how to run full CMD commands in a web window – not that you would ever do that, but it was an interesting view into those links. This leads to a discussion on hosting other runtimes (such as Java or PHP) in Windows Azure. I got an expanded view on this process, although this is where the book shows its age a little. Books can be a problem for Cloud Computing for this reason – things just change too quickly. Windows Azure storage is not eventually consistent – it is instantly consistent with multi-phase commit. Plumbing for this is internal, not required to code that. (Chapter 7) REST API makes the service interoperable, hybrid, and consistent across code architectures. Nicely done. Use affinity groups to keep data and code together. Side note: e-book readers need a common “notes” feature. There’s a decent quick description of REST in this chapter. Learned about CloudDrive code – PowerShell sample that mounts Blob storage as a local provider. Works against Dev fabric by default, can be switched to Account. Good treatment in the storage chapters on the differences between using Dev storage and Azure storage. These can be mitigated. No, blobs are not of any size or number. Not a good statement (Chapter 8) Blob storage is probably Azure’s closest play to Infrastructure as a Service (Iaas). Blob change operations must be authenticated, even when public. Chapters on storage are pretty in-depth. Queue Messages are base-64 encoded (Chapter 9) The visibility timeout ensures processing of message in a disconnected system. Order is not guaranteed for a message, so if you need that set an increasing number in the queue mechanism. While Queues are accessible via REST, they are not public and are secured by default. Interesting – the header for a queue request includes an estimated count. This can be useful to create more worker roles in a dynamic system. Each Entity (row) in the Azure Table service is atomic – all or nothing. (Chapter 10) An entity can have up to 255 Properties  Use “ID” for the class to indicate the key value, or use the [DataServiceKey] Attribute.  LINQ makes working with the Azure Table Service much easier, although Interop is certainly possible. Good description on the process of selecting the Partition and Row Key.  
When checking for continuation tokens for pagination, include logic that falls out of the check in case you are at the last page.  On deleting a storage object, it is instantly unavailable, however a background process is dispatched to perform the physical deletion. So if you want to re-create a storage object with the same name, add retry logic into the code. Interesting approach to deleting an index entity without having to read it first – create a local entity with the same keys and apply it to the Azure system regardless of change-state.  Although the “Indexes” description is a little vague, it’s interesting to see a Folding and Stemming discussion a-la the Porter Stemming Algorithm. (Chapter 11)  Presents a better discussion of indexes (at least inverted indexes) later in the chapter. Great treatment for DBA’s in Chapter 11. We need to work on getting secondary indexes in Table storage. There is a limited form of transactions called “Entity Group Transactions” that, although they have conditions, makes a transactional system more possible. Concurrency also becomes an issue, but is handled well if you’re using Data Services in .NET. It watches the Etag and allows you to take action appropriately. I do not recommend using Azure as a location for secure backups. In fact, I would rather have seen the examples in (Chapter 12) go the other way, showing how data could be brought back to a local store as a DR or HA strategy. Good information on cryptography and so on even so. Chapter seems out of place, and should be combined with the Blob chapter.  (Chapter 13) on SQL Azure is dated, although the base concepts are OK.  Nice example of simple ADO.NET access to a SQL Azure (or any SQL Server Really) database.  
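    One of the notes above (the Chapter 10 point about using "ID" or the [DataServiceKey] attribute to mark the key) is easier to see in code. A minimal sketch against the v1.x StorageClient library, where the base class supplies the composite key; the entity and property names are invented for illustration:

        // Hypothetical Table Service entity: PartitionKey + RowKey inherited
        // from TableServiceEntity form the primary key the note refers to.
        public class InvoiceEntity : Microsoft.WindowsAzure.StorageClient.TableServiceEntity
        {
            public InvoiceEntity(string customerId, string invoiceNumber)
            {
                PartitionKey = customerId;   // chosen to keep one customer's rows together
                RowKey = invoiceNumber;      // must be unique within the partition
            }

            public InvoiceEntity() { }       // parameterless constructor required by the data services layer

            public double Amount { get; set; }
        }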

    Read the article

  • Problems Mapping a List of Serializable Objects with JDO

    - by Sergio del Amo
    I have two classes Invoice and InvoiceItem. I would like Invoice to have a List of InvoiceItem Objets. I have red that the list must be of primitive or serializable objects. I have made InvoiceItem Serializable. Invoice.java looks like import java.util.ArrayList; import java.util.Date; import java.util.List; import javax.jdo.annotations.Column; import javax.jdo.annotations.Embedded; import javax.jdo.annotations.EmbeddedOnly; import javax.jdo.annotations.IdGeneratorStrategy; import javax.jdo.annotations.IdentityType; import javax.jdo.annotations.PersistenceCapable; import javax.jdo.annotations.Persistent; import javax.jdo.annotations.Element; import javax.jdo.annotations.PrimaryKey; import com.google.appengine.api.datastore.Key; import com.softamo.pelicamo.shared.InvoiceCompanyDTO; @PersistenceCapable(identityType = IdentityType.APPLICATION) public class Invoice { @PrimaryKey @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY) private Long id; @Persistent private String number; @Persistent private Date date; @Persistent private List<InvoiceItem> items = new ArrayList<InvoiceItem>(); public Invoice() {} public Long getId() { return id; } public void setId(Long id) {this.id = id;} public String getNumber() { return number;} public void setNumber(String invoiceNumber) { this.number = invoiceNumber;} public Date getDate() { return date;} public void setDate(Date invoiceDate) { this.date = invoiceDate;} public List<InvoiceItem> getItems() { return items;} public void setItems(List<InvoiceItem> items) { this.items = items;} } and InvoiceItem.java looks like import java.io.Serializable; import java.math.BigDecimal; import javax.jdo.annotations.PersistenceCapable; import javax.jdo.annotations.Persistent; @PersistenceCapable public class InvoiceItem implements Serializable { @Persistent private BigDecimal amount; @Persistent private float quantity; public InvoiceItem() {} public BigDecimal getAmount() { return amount;} public void setAmount(BigDecimal amount) { this.amount = amount;} public float getQuantity() { return quantity;} public void setQuantity(float quantity) { this.quantity = quantity;} } I get the next error while running a JUnit test. 
        javax.jdo.JDOUserException: Attempt to handle persistence for object using datastore-identity yet StoreManager for this datastore doesn't support that identity type
            at org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:375)
            at org.datanucleus.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:674)
            at org.datanucleus.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:694)
            at com.softamo.pelicamo.server.InvoiceStore.add(InvoiceStore.java:23)
            at com.softamo.pelicamo.server.PopulateStorage.storeInvoices(PopulateStorage.java:58)
            at com.softamo.pelicamo.server.PopulateStorage.run(PopulateStorage.java:46)
            at com.softamo.pelicamo.server.InvoiceStoreTest.setUp(InvoiceStoreTest.java:44)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
            at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
            at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
            at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:27)
            at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
            at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76)
            at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
            at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
            at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
            at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
            at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
            at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
            at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
            at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:46)
            at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
        NestedThrowablesStackTrace:
        Attempt to handle persistence for object using datastore-identity yet StoreManager for this datastore doesn't support that identity type
        org.datanucleus.exceptions.NucleusUserException: Attempt to handle persistence for object using datastore-identity yet StoreManager for this datastore doesn't support that identity type
            at org.datanucleus.state.AbstractStateManager.<init>(AbstractStateManager.java:128)
            at org.datanucleus.state.JDOStateManagerImpl.<init>(JDOStateManagerImpl.java:215)
            at org.datanucleus.jdo.JDOAdapter.newStateManager(JDOAdapter.java:119)
            at org.datanucleus.state.StateManagerFactory.newStateManagerForPersistentNew(StateManagerFactory.java:150)
            at org.datanucleus.ObjectManagerImpl.persistObjectInternal(ObjectManagerImpl.java:1297)
            at org.datanucleus.sco.SCOUtils.validateObjectForWriting(SCOUtils.java:1476)
            at org.datanucleus.store.mapped.scostore.ElementContainerStore.validateElementForWriting(ElementContainerStore.java:380)
            at org.datanucleus.store.mapped.scostore.FKListStore.validateElementForWriting(FKListStore.java:609)
            at org.datanucleus.store.mapped.scostore.FKListStore.internalAdd(FKListStore.java:344)
            at org.datanucleus.store.appengine.DatastoreFKListStore.internalAdd(DatastoreFKListStore.java:146)
            at org.datanucleus.store.mapped.scostore.AbstractListStore.addAll(AbstractListStore.java:128)
            at org.datanucleus.store.mapped.mapping.CollectionMapping.postInsert(CollectionMapping.java:157)
            at org.datanucleus.store.appengine.DatastoreRelationFieldManager.runPostInsertMappingCallbacks(DatastoreRelationFieldManager.java:216)
            at org.datanucleus.store.appengine.DatastoreRelationFieldManager.access$200(DatastoreRelationFieldManager.java:47)
            at org.datanucleus.store.appengine.DatastoreRelationFieldManager$1.apply(DatastoreRelationFieldManager.java:115)
            at org.datanucleus.store.appengine.DatastoreRelationFieldManager.storeRelations(DatastoreRelationFieldManager.java:80)
            at org.datanucleus.store.appengine.DatastoreFieldManager.storeRelations(DatastoreFieldManager.java:955)
            at org.datanucleus.store.appengine.DatastorePersistenceHandler.storeRelations(DatastorePersistenceHandler.java:527)
            at org.datanucleus.store.appengine.DatastorePersistenceHandler.insertPostProcess(DatastorePersistenceHandler.java:299)
            at org.datanucleus.store.appengine.DatastorePersistenceHandler.insertObjects(DatastorePersistenceHandler.java:251)
            at org.datanucleus.store.appengine.DatastorePersistenceHandler.insertObject(DatastorePersistenceHandler.java:235)
            at org.datanucleus.state.JDOStateManagerImpl.internalMakePersistent(JDOStateManagerImpl.java:3185)
            at org.datanucleus.state.JDOStateManagerImpl.makePersistent(JDOStateManagerImpl.java:3161)
            at org.datanucleus.ObjectManagerImpl.persistObjectInternal(ObjectManagerImpl.java:1298)
            at org.datanucleus.ObjectManagerImpl.persistObject(ObjectManagerImpl.java:1175)
            at org.datanucleus.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:669)
            at org.datanucleus.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:694)
            at com.softamo.pelicamo.server.InvoiceStore.add(InvoiceStore.java:23)
            at com.softamo.pelicamo.server.PopulateStorage.storeInvoices(PopulateStorage.java:58)
            at com.softamo.pelicamo.server.PopulateStorage.run(PopulateStorage.java:46)
            at com.softamo.pelicamo.server.InvoiceStoreTest.setUp(InvoiceStoreTest.java:44)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
            at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
            at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
            at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:27)
            at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
            at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76)
            at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
            at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
            at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
            at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
            at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
            at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
            at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
            at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:46)
            at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)

    Moreover, when I try to store an invoice with a list of items through my app, I can see in the development console that the items are not persisted at all, while the rest of the Invoice properties are stored properly. Does anyone know what I am doing wrong?

    Solution

    As pointed out in the answers, the error means that the InvoiceItem class was missing a primary key. I first tried:

        @PrimaryKey
        @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
        private Long id;

    But then I was getting:

        javax.jdo.JDOFatalUserException: Error in meta-data for InvoiceItem.id: Cannot have a java.lang.Long primary key and be a child object (owning field is Invoice.items)

    In "persist list of objects", @aldrin pointed out that for child classes the primary key has to be a com.google.appengine.api.datastore.Key value (or encoded as a String). So I tried with Key, and it worked:

        @PrimaryKey
        @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
        private Key id;
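
    For completeness, a sketch of what the corrected child class ends up looking like with the Key primary key in place; the fields come from the question, and the remaining getters and setters are unchanged and elided here:

        import java.io.Serializable;
        import java.math.BigDecimal;
        import javax.jdo.annotations.IdGeneratorStrategy;
        import javax.jdo.annotations.PersistenceCapable;
        import javax.jdo.annotations.Persistent;
        import javax.jdo.annotations.PrimaryKey;
        import com.google.appengine.api.datastore.Key;

        @PersistenceCapable
        public class InvoiceItem implements Serializable {

            // A child in an owned relationship needs a Key (or encoded-String)
            // primary key so the datastore can place it in its parent's entity group.
            @PrimaryKey
            @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
            private Key id;

            @Persistent
            private BigDecimal amount;

            @Persistent
            private float quantity;

            public InvoiceItem() {}

            // amount/quantity getters and setters as in the question...
        }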

    Read the article

  • How does cobol store and retrieve data?

    - by controlfreak123
    I'm starting to learn about COBOL. I have some experience writing programs that deal with SQL databases, and I guess I'm confused about how COBOL stores and retrieves data that lives on a mainframe, for example. I know it's not like relational databases, but every example program I've seen takes data straight from the command line, and I know that's not how real-world COBOL programs process data. Can someone explain, or point me to a good resource that does?
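
    To make the model concrete: COBOL traditionally reads and writes record-oriented files (flat sequential files or indexed VSAM datasets on the mainframe), with the record layout declared right in the program. A minimal sketch follows; the dataset name CUSTMAST, the fields, and the PIC clauses are all invented for illustration:

        IDENTIFICATION DIVISION.
        PROGRAM-ID. LISTCUST.
        ENVIRONMENT DIVISION.
        INPUT-OUTPUT SECTION.
        FILE-CONTROL.
            SELECT CUSTOMER-FILE ASSIGN TO 'CUSTMAST'
                ORGANIZATION IS INDEXED
                ACCESS MODE IS SEQUENTIAL
                RECORD KEY IS CUST-ID.
        DATA DIVISION.
        FILE SECTION.
        FD  CUSTOMER-FILE.
        01  CUSTOMER-RECORD.
            05  CUST-ID       PIC 9(6).
            05  CUST-NAME     PIC X(30).
            05  CUST-BALANCE  PIC S9(7)V99 COMP-3.
        WORKING-STORAGE SECTION.
        01  WS-EOF            PIC X VALUE 'N'.
        PROCEDURE DIVISION.
            OPEN INPUT CUSTOMER-FILE
            PERFORM UNTIL WS-EOF = 'Y'
                READ CUSTOMER-FILE
                    AT END MOVE 'Y' TO WS-EOF
                    NOT AT END DISPLAY CUST-NAME
                END-READ
            END-PERFORM
            CLOSE CUSTOMER-FILE
            STOP RUN.

    Real-world programs also talk to relational databases such as DB2 through embedded SQL (EXEC SQL ... END-EXEC), but the record-at-a-time file pattern above is the traditional core.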

    Read the article

  • Core Data vs. SQLitePersistentObjects

    - by Macatomy
    I'm creating an iPhone app and I'm trying to choose between two solutions for a persistent store: Core Data or SQLitePersistentObjects. Basically, all my app needs is a way to store an array of model objects and then load them again to display in a UITableView. It's nothing too complicated. Core Data seems to have a much higher learning curve than the simple-to-use SQLitePersistentObjects. Are there any obvious benefits to using Core Data over SQLitePersistentObjects in my case?

    Read the article

  • Rails Action and Fragment Caching during Development?

    - by viatropos
    I'm just starting to get into Rails caching and am wondering how to test whether or not caching is working in my development environment. I have set these two config variables for both the development (temporarily) and production environments:

        config.action_controller.perform_caching = true
        config.action_controller.page_cache_directory = File.join(RAILS_ROOT, 'public', 'cache')

    And my controller basically looks like this (using the resource_controller gem):

        class EventsController < Spree::BaseController
          resource_controller
          caches_page :index

          index.response do |format|
            format.html
            format.xml { render :xml => @collection.to_xml }
          end

          # ...
        end

    If I do that, I get these files:

        public/cache/events.html
        public/cache/events.xml

    But when I change caches_page to caches_action, I don't see any generated files, and I'm not sure how to tell whether the response has been cached. What do I need to check to verify that it's working? I've read the API docs, the Rails caching guide, the Railscast, and a few other resources, but I'm still wondering where everything is stored. Thanks!
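
    For what it's worth, one way to make action caching visible on disk, sketched under the assumption of a Rails 2.x setup like the one above (the tmp/cache path is arbitrary):

        # config/environments/development.rb
        config.action_controller.perform_caching = true

        # caches_page writes whole files under page_cache_directory, but
        # caches_action and fragment caching go through the cache store,
        # which defaults to an in-memory store; that is why no files appear.
        # Point the store at disk so you can inspect it:
        config.action_controller.cache_store = :file_store, File.join(RAILS_ROOT, 'tmp', 'cache')

    After hitting /events, you should then find a file whose path looks roughly like tmp/cache/views/<host>/events.cache; if it shows up (and the action log line reports a cache hit on the next request), action caching is working.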

    Read the article

  • ANY way to consolidate this code?

    - by JM4
    I am building a PHP registration form which takes the following fields for up to 20 athletes: First Name Middle Initial Last Name Federation Number Address City State Zip DOB SSN Phone Email I am only through 7 of the fields for each fighter and my php file is very large (over 40kb). Is there ANY way to consolidate this code at all? I am also having to validate the information on each field (as I said - 20 athletes x 12 fields = 240 validations on a single page). If I can send any further code let me know! <form id="Form" action="<?php $_SERVER['PHP_SELF']; ?>" method="post" name="Form" onsubmit="return Enroll_Form_Validator(this)"> <p class="title">Your Fighters' Information</p> <p>Please complete the following fields with your <span style="color:red;"> Fighters' Information</span> to continue your enrollment.</p> <br /> <?php // if $errors is not empty, the form must have failed one or more validation // tests. Loop through each and display them on the page for the user if (!empty($errors)) { echo "<div class='error'>Please fix the following errors:\n<ul>"; foreach ($errors as $error) echo "<li>$error</li>\n"; echo "</ul></div>"; } ?> <?php if ($_SESSION['Num_Fighters'] > "0") { ?> <table class="demoTable"> <tr> <td>First Name: </td> <td><input type="text" name="F1FirstName" onkeyup="if(!this.value.match(/^([a-z]+\s?)*$/i))this.value=this.value.replace(/[^a-z ]/ig,'').replace(/\s+/g,' ')" value="<?php echo $fields['F1FirstName']; ?>" /></td> </tr> <tr> <td>Middle Initial: </td> <td><input type="text" name="F1MI" size="2" maxlength="1" onkeyup="if(!this.value.match(/^([a-z]+\s?)*$/i))this.value=this.value.replace(/[^a-z ]/ig,'').replace(/\s+/g,' ')" value="<?php echo $fields['F1MI']; ?>" /></td> </tr> <tr> <td>Last Name: </td> <td><input type="text" name="F1LastName" onkeyup="if(!this.value.match(/^([a-z]+\s?)*$/i))this.value=this.value.replace(/[^a-z ]/ig,'').replace(/\s+/g,' ')" value="<?php echo $fields['F1LastName']; ?>" /></td> </tr> <tr> <td>Federation No: </td> <td><input type="text" name="F1FedNum" maxlength="10" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F1FedNum']; ?>" /></td> </tr> <tr> <td>SSN: </td> <td><input type="text" name="F1SSN1" size="3" maxlength="3" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F1SSN1']; ?>" /> - <input type="text" name="F1SSN2" size="2" maxlength="2" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F1SSN2']; ?>" /> - <input type="text" name="F1SSN3" size="4" maxlength="4" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F1SSN3']; ?>" /> </td> </tr> <tr> <td>Date of Birth</td> <td> <select name="F1DOB1"> <option value="">Month</option> <?php for ($i=1; $i<=12; $i++) { echo "<option value='$i'"; if ($fields["F1DOB1"] == $i) echo " selected"; echo ">$i</option>"; } ?> </select> / <select name="F1DOB2"> <option value="">Day</option> <?php for ($i=1; $i<=31; $i++) { echo "<option value='$i'"; if ($fields["F1DOB2"] == $i) echo " selected"; echo ">$i</option>"; } ?> </select> / <select name="F1DOB3"> <option value="">Year</option> <?php for ($i=date('Y'); $i>=1900; $i--) { echo "<option value='$i'"; if ($fields["F1DOB3"] == $i) echo " selected"; echo ">$i</option>"; } ?> </select> </td> </tr> <tr> <td>Address: </td> <td><input type="text" name="F1Address" value="<?php echo $fields['F1Address']; ?>" /></td> </tr> <tr> <td>City: </td> <td><input type="text" name="F1City" 
onkeyup="if(!this.value.match(/^([a-z]+\s?)*$/i))this.value=this.value.replace(/[^a-z ]/ig,'').replace(/\s+/g,' ')" value="<?php echo $fields['F1City']; ?>" /></td> </tr> <tr> <td>State: </td> <td><select name="F1State"><option value="">Choose a State</option><?php showOptionsDrop($states_arr, null, true); ?></select></td> </tr> <tr> <td>Zip Code: </td> <td><input type="text" name="F1Zip" size="6" maxlength="5" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F1Zip']; ?>" /></td> </tr> <tr> <td>Contact Telephone No: </td> <td>( <input type="text" name="F1Phone1" size="3" maxlength="3" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F1Phone1']; ?>" /> ) <input type="text" name="F1Phone2" size="3" maxlength="3" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F1Phone2']; ?>" /> - <input type="text" name="F1Phone3" size="4" maxlength="4" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F1Phone3']; ?>" /> </td> </tr> <tr> <td>Email:</td> <td><input type="text" name="F1Email" value="<?php echo $fields['F1Email']; ?>" /></td> </tr> </table> <?php } ?> <br /> <?php if ($_SESSION['Num_Fighters'] > "1") { ?> <table class="demoTable"> <tr> <td>First Name: </td> <td><input type="text" name="F2FirstName" onkeyup="if(!this.value.match(/^([a-z]+\s?)*$/i))this.value=this.value.replace(/[^a-z ]/ig,'').replace(/\s+/g,' ')" value="<?php echo $fields['F2FirstName']; ?>" /></td> </tr> <tr> <td>Middle Initial: </td> <td><input type="text" name="F2MI" size="2" maxlength="1" onkeyup="if(!this.value.match(/^([a-z]+\s?)*$/i))this.value=this.value.replace(/[^a-z ]/ig,'').replace(/\s+/g,' ')" value="<?php echo $fields['F2MI']; ?>" /></td> </tr> <tr> <td>Last Name: </td> <td><input type="text" name="F2LastName" onkeyup="if(!this.value.match(/^([a-z]+\s?)*$/i))this.value=this.value.replace(/[^a-z ]/ig,'').replace(/\s+/g,' ')" value="<?php echo $fields['F2LastName']; ?>" /></td> </tr> <tr> <td>Federation No: </td> <td><input type="text" name="F2FedNum" maxlength="10" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F2FedNum']; ?>" /></td> </tr> <tr> <td>SSN: </td> <td><input type="text" name="F2SSN1" size="3" maxlength="3" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F2SSN1']; ?>" /> - <input type="text" name="F2SSN2" size="2" maxlength="2" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F2SSN2']; ?>" /> - <input type="text" name="F2SSN3" size="4" maxlength="4" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F2SSN3']; ?>" /> </td> </tr> <tr> <td>Date of Birth</td> <td> <select name="F2DOB1"> <option value="">Month</option> <?php for ($i=1; $i<=12; $i++) { echo "<option value='$i'"; if ($fields["F2DOB1"] == $i) echo " selected"; echo ">$i</option>"; } ?> </select> / <select name="F2DOB2"> <option value="">Day</option> <?php for ($i=1; $i<=31; $i++) { echo "<option value='$i'"; if ($fields["F2DOB2"] == $i) echo " selected"; echo ">$i</option>"; } ?> </select> / <select name="F2DOB3"> <option value="">Year</option> <?php for ($i=date('Y'); $i>=1900; $i--) { echo "<option value='$i'"; if ($fields["F2DOB3"] == $i) echo " selected"; echo ">$i</option>"; } ?> </select> </td> </tr> <tr> <td>Address: </td> <td><input type="text" name="F2Address" value="<?php echo $fields['F2Address']; ?>" /></td> </tr> <tr> <td>City: </td> <td><input type="text" 
name="F2City" onkeyup="if(!this.value.match(/^([a-z]+\s?)*$/i))this.value=this.value.replace(/[^a-z ]/ig,'').replace(/\s+/g,' ')" value="<?php echo $fields['F2City']; ?>" /></td> </tr> <tr> <td>State: </td> <td><select name="F2State"><option value="">Choose a State</option><?php showOptionsDrop($states_arr, null, true); ?></select></td> </tr> <tr> <td>Zip Code: </td> <td><input type="text" name="F2Zip" size="6" maxlength="5" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F2Zip']; ?>" /></td> </tr> <tr> <td>Contact Telephone No: </td> <td>( <input type="text" name="F2Phone1" size="3" maxlength="3" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F2Phone1']; ?>" /> ) <input type="text" name="F2Phone2" size="3" maxlength="3" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F2Phone2']; ?>" /> - <input type="text" name="F2Phone3" size="4" maxlength="4" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F2Phone3']; ?>" /> </td> </tr> <tr> <td>Email:</td> <td><input type="text" name="F2Email" value="<?php echo $fields['F2Email']; ?>" /></td> </tr> </table> <?php } ?> <br /> <?php if ($_SESSION['Num_Fighters'] > "2") { ?> <table class="demoTable"> <tr> <td>First Name: </td> <td><input type="text" name="F3FirstName" onkeyup="if(!this.value.match(/^([a-z]+\s?)*$/i))this.value=this.value.replace(/[^a-z ]/ig,'').replace(/\s+/g,' ')" value="<?php echo $fields['F3FirstName']; ?>" /></td> </tr> <tr> <td>Middle Initial: </td> <td><input type="text" name="F3MI" size="2" maxlength="1" onkeyup="if(!this.value.match(/^([a-z]+\s?)*$/i))this.value=this.value.replace(/[^a-z ]/ig,'').replace(/\s+/g,' ')" value="<?php echo $fields['F3MI']; ?>" /></td> </tr> <tr> <td>Last Name: </td> <td><input type="text" name="F3LastName" onkeyup="if(!this.value.match(/^([a-z]+\s?)*$/i))this.value=this.value.replace(/[^a-z ]/ig,'').replace(/\s+/g,' ')" value="<?php echo $fields['F3LastName']; ?>" /></td> </tr> <tr> <td>Federation No: </td> <td><input type="text" name="F3FedNum" maxlength="10" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F3FedNum']; ?>" /></td> </tr> <tr> <td>SSN: </td> <td><input type="text" name="F3SSN1" size="3" maxlength="3" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F3SSN1']; ?>" /> - <input type="text" name="F3SSN2" size="2" maxlength="2" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F3SSN2']; ?>" /> - <input type="text" name="F3SSN3" size="4" maxlength="4" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F3SSN3']; ?>" /> </td> </tr> <tr> <td>Date of Birth</td> <td> <select name="F3DOB1"> <option value="">Month</option> <?php for ($i=1; $i<=12; $i++) { echo "<option value='$i'"; if ($fields["F3DOB1"] == $i) echo " selected"; echo ">$i</option>"; } ?> </select> / <select name="F3DOB2"> <option value="">Day</option> <?php for ($i=1; $i<=31; $i++) { echo "<option value='$i'"; if ($fields["F3DOB2"] == $i) echo " selected"; echo ">$i</option>"; } ?> </select> / <select name="F3DOB3"> <option value="">Year</option> <?php for ($i=date('Y'); $i>=1900; $i--) { echo "<option value='$i'"; if ($fields["F3DOB3"] == $i) echo " selected"; echo ">$i</option>"; } ?> </select> </td> </tr> <tr> <td>Address: </td> <td><input type="text" name="F3Address" value="<?php echo $fields['F3Address']; ?>" /></td> </tr> <tr> <td>City: </td> <td><input 
type="text" name="F3City" onkeyup="if(!this.value.match(/^([a-z]+\s?)*$/i))this.value=this.value.replace(/[^a-z ]/ig,'').replace(/\s+/g,' ')" value="<?php echo $fields['F3City']; ?>" /></td> </tr> <tr> <td>State: </td> <td><select name="F3State"><option value="">Choose a State</option><?php showOptionsDrop($states_arr, null, true); ?></select></td> </tr> <tr> <td>Zip Code: </td> <td><input type="text" name="F3Zip" size="6" maxlength="5" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F3Zip']; ?>" /></td> </tr> <tr> <td>Contact Telephone No: </td> <td>( <input type="text" name="F3Phone1" size="3" maxlength="3" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F3Phone1']; ?>" /> ) <input type="text" name="F3Phone2" size="3" maxlength="3" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F3Phone2']; ?>" /> - <input type="text" name="F3Phone3" size="4" maxlength="4" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F3Phone3']; ?>" /> </td> </tr> <tr> <td>Email:</td> <td><input type="text" name="F3Email" value="<?php echo $fields['F3Email']; ?>" /></td> </tr> </table> <?php } ?> <br /> <?php if ($_SESSION['Num_Fighters'] > "3") { ?> <table class="demoTable"> <tr> <td>First Name: </td> <td><input type="text" name="F4FirstName" onkeyup="if(!this.value.match(/^([a-z]+\s?)*$/i))this.value=this.value.replace(/[^a-z ]/ig,'').replace(/\s+/g,' ')" value="<?php echo $fields['F4FirstName']; ?>" /></td> </tr> <tr> <td>Middle Initial: </td> <td><input type="text" name="F4MI" size="2" maxlength="1" onkeyup="if(!this.value.match(/^([a-z]+\s?)*$/i))this.value=this.value.replace(/[^a-z ]/ig,'').replace(/\s+/g,' ')" value="<?php echo $fields['F4MI']; ?>" /></td> </tr> <tr> <td>Last Name: </td> <td><input type="text" name="F4LastName" onkeyup="if(!this.value.match(/^([a-z]+\s?)*$/i))this.value=this.value.replace(/[^a-z ]/ig,'').replace(/\s+/g,' ')" value="<?php echo $fields['F4LastName']; ?>" /></td> </tr> <tr> <td>Federation No: </td> <td><input type="text" name="F4FedNum" maxlength="10" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F4FedNum']; ?>" /></td> </tr> <tr> <td>SSN: </td> <td><input type="text" name="F4SSN1" size="3" maxlength="3" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F4SSN1']; ?>" /> - <input type="text" name="F4SSN2" size="2" maxlength="2" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F4SSN2']; ?>" /> - <input type="text" name="F4SSN3" size="4" maxlength="4" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F4SSN3']; ?>" /> </td> </tr> <tr> <td>Date of Birth</td> <td> <select name="F4DOB1"> <option value="">Month</option> <?php for ($i=1; $i<=12; $i++) { echo "<option value='$i'"; if ($fields["F4DOB1"] == $i) echo " selected"; echo ">$i</option>"; } ?> </select> / <select name="F4DOB2"> <option value="">Day</option> <?php for ($i=1; $i<=31; $i++) { echo "<option value='$i'"; if ($fields["F4DOB2"] == $i) echo " selected"; echo ">$i</option>"; } ?> </select> / <select name="F4DOB3"> <option value="">Year</option> <?php for ($i=date('Y'); $i>=1900; $i--) { echo "<option value='$i'"; if ($fields["F4DOB3"] == $i) echo " selected"; echo ">$i</option>"; } ?> </select> </td> </tr> <tr> <td>Address: </td> <td><input type="text" name="F4Address" value="<?php echo $fields['F4Address']; ?>" /></td> </tr> <tr> <td>City: </td> 
<td><input type="text" name="F4City" onkeyup="if(!this.value.match(/^([a-z]+\s?)*$/i))this.value=this.value.replace(/[^a-z ]/ig,'').replace(/\s+/g,' ')" value="<?php echo $fields['F4City']; ?>" /></td> </tr> <tr> <td>State: </td> <td><select name="F4State"><option value="">Choose a State</option><?php showOptionsDrop($states_arr, null, true); ?></select></td> </tr> <tr> <td>Zip Code: </td> <td><input type="text" name="F4Zip" size="6" maxlength="5" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F4Zip']; ?>" /></td> </tr> <tr> <td>Contact Telephone No: </td> <td>( <input type="text" name="F4Phone1" size="3" maxlength="3" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F4Phone1']; ?>" /> ) <input type="text" name="F4Phone2" size="3" maxlength="3" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F4Phone2']; ?>" /> - <input type="text" name="F4Phone3" size="4" maxlength="4" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F4Phone3']; ?>" /> </td> </tr> <tr> <td>Email:</td> <td><input type="text" name="F4Email" value="<?php echo $fields['F4Email']; ?>" /></td> </tr> </table> <?php } ?> <div align="right"><input class="enrbutton" type="submit" name="submit" value="Continue" /></div> </form> This only goes through 4 athletes and I need it to capture 20. Any ideas? I am forced to keep all 200+ elements in SESSION assuming somebody enrolls 20 athletes.

    Read the article

  • Azure Tables or SQL Azure?

    - by Phil Wright
    I am at the planning stage of a web application that will be hosted in Azure with ASP.NET for the web site and Silverlight within the site for a rich user experience. Should I use Azure Tables or SQL Azure for storing my application data?

    Read the article

  • Storing URLs while Spidering

    - by itemio
    I created a little web spider in Python which I'm using to collect URLs. I'm not interested in the content. Right now I'm keeping all the visited URLs in a set in memory, because I don't want my spider to visit URLs twice. Of course, that's a very limited way of accomplishing this. So what's the best way to keep track of my visited URLs? Should I use a database?
    * Which one? MySQL, SQLite, PostgreSQL?
    * How should I save the URLs? As a primary key, trying to insert every URL before visiting it?
    Or should I write them to a file?
    * One file?
    * Multiple files? How should I design the file structure?
    I'm sure there are books and a lot of papers on this or similar topics. Can you give me some advice on what I should read?
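
    For illustration, a minimal sketch of the SQLite route, assuming URLs are plain strings: with the URL as the primary key, "have I seen this?" and "remember this" collapse into a single INSERT OR IGNORE.

        import sqlite3

        conn = sqlite3.connect("crawler.db")  # file name is arbitrary
        conn.execute("CREATE TABLE IF NOT EXISTS visited (url TEXT PRIMARY KEY)")

        def mark_visited(url):
            """Return True if the URL was new, False if it was already seen."""
            cur = conn.execute(
                "INSERT OR IGNORE INTO visited (url) VALUES (?)", (url,))
            conn.commit()
            return cur.rowcount == 1  # 1 row inserted means it was unseen

    Unlike the in-memory set, this survives restarts and scales past RAM, and the unique index makes the membership check cheap.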

    Read the article

  • store everything only once, smarter

    - by hsmit
    In the digital world a lot is stored multiple times. As a thought experiment or creative challenge, I want you to think about making this more efficient and maybe reusing more. Think of the following cases:
    * an MP3 track is downloaded multiple times and copied across various devices
    * on websites, a login form is often rebuilt many times; why not reuse more code?
    * words themselves are used many times
    * questions and answers are accidentally saved in many places in parallel
    * images or photos often describe the same data (Eiffel Tower, Golden Gate, Taj Mahal)
    * etc., etc.
    Are you aware of solutions? Or are you thinking about similar topics? Ideas? Blueprints? I'd love to hear from you!
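
    One classic answer to several of these cases is content-addressable storage: identical bytes hash to the same key, so each unique blob is stored exactly once no matter how many names point at it. A minimal Python sketch, where the blobstore directory name is an invented example:

        import hashlib
        import os

        STORE = "blobstore"  # directory name is an assumption

        def put(data):
            """Store data once; return the key any number of references can share."""
            key = hashlib.sha256(data).hexdigest()
            path = os.path.join(STORE, key)
            os.makedirs(STORE, exist_ok=True)
            if not os.path.exists(path):  # first time we've seen these bytes
                with open(path, "wb") as f:
                    f.write(data)
            return key

    Deduplicating file systems, git object stores, and backup tools all work on this principle.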

    Read the article
