Search Results

Search found 694 results on 28 pages for 'blob'.

  • Run command with space characters in bash script

    - by ??iu
    I have a file that contains a list of files:

        02 of Clubs.eps
        02 of Diamonds.eps
        02 of Hearts.eps
        02 of Spades.eps
        ...

    I am attempting to mass-convert these to png format in several sizes. The script I am using to do this is:

        while read -r line
        do
            for i in 80 35 200
            do
                convert $(sed 's/ /\\ /g' <<< Cards/${line}) -size ${i}x${i} ../img/card/$(basename $(tr ' ' '_' <<< ${line} | tr '[A-Z]' '[a-z]') .eps)_${i}.png;
            done
        done < card_list.txt

    However, this doesn't work, apparently trying to split on each word, resulting in the following error output:

        convert: unable to open image `Cards/02\': No such file or directory @ error/blob.c/OpenBlob/2514.
        convert: no decode delegate for this image format `Cards/02\' @ error/constitute.c/ReadImage/532.
        convert: unable to open image `of\': No such file or directory @ error/blob.c/OpenBlob/2514.
        convert: no decode delegate for this image format `of\' @ error/constitute.c/ReadImage/532.
        convert: unable to open image `Clubs.eps': No such file or directory @ error/blob.c/OpenBlob/2514.

    If I change the convert to an echo the result looks right, and if I copy a line and run it myself in the shell it works fine:

        convert Cards/02\ of\ Clubs.eps -size 80x80 ../img/card/02_of_clubs_80.png
        convert Cards/02\ of\ Clubs.eps -size 35x35 ../img/card/02_of_clubs_35.png
        convert Cards/02\ of\ Clubs.eps -size 200x200 ../img/card/02_of_clubs_200.png
        convert Cards/02\ of\ Diamonds.eps -size 80x80 ../img/card/02_of_diamonds_80.png
        convert Cards/02\ of\ Diamonds.eps -size 35x35 ../img/card/02_of_diamonds_35.png
        convert Cards/02\ of\ Diamonds.eps -size 200x200 ../img/card/02_of_diamonds_200.png
        convert Cards/02\ of\ Hearts.eps -size 80x80 ../img/card/02_of_hearts_80.png
        convert Cards/02\ of\ Hearts.eps -size 35x35 ../img/card/02_of_hearts_35.png
        convert Cards/02\ of\ Hearts.eps -size 200x200 ../img/card/02_of_hearts_200.png
        convert Cards/02\ of\ Spades.eps -size 80x80 ../img/card/02_of_spades_80.png

    UPDATE: Just adding quotes (see below) has the same result as the above, where I had been using sed to add backslashes. I've tried both double and single quotes.

        convert '"'Cards/${line}'"' -size ${i}x${i} ../img/card/$(basename $(tr ' ' '_' <<< ${line} | tr '[A-Z]' '[a-z]') .eps)_${i}.png;
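
    What seems to be going on (a hedged reading): the output of $(...) undergoes word splitting after expansion, and the backslashes sed inserts are literal characters by that point - they are never re-parsed as escapes, which matches the `Cards/02\` tokens in the errors. Quoting the expansion directly avoids the escaping pass entirely; a minimal sketch:

        while IFS= read -r line; do
            for i in 80 35 200; do
                base=$(basename "$line" .eps)                       # "02 of Clubs"
                base=$(tr ' ' '_' <<< "$base" | tr '[A-Z]' '[a-z]') # "02_of_clubs"
                convert "Cards/$line" -size "${i}x${i}" "../img/card/${base}_${i}.png"
            done
        done < card_list.txt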

  • How to download data uploaded to GAE

    - by zjm1126
    This is my code:

        import os
        from google.appengine.ext import webapp
        from google.appengine.ext.webapp import template
        from google.appengine.ext.webapp.util import run_wsgi_app
        from google.appengine.ext import db
        #from login import htmlPrefix,get_current_user

        class MyModel(db.Model):
            blob = db.BlobProperty()

        class BaseRequestHandler(webapp.RequestHandler):
            def render_template(self, filename, template_args=None):
                if not template_args:
                    template_args = {}
                path = os.path.join(os.path.dirname(__file__), 'templates', filename)
                self.response.out.write(template.render(path, template_args))

        class upload(BaseRequestHandler):
            def get(self):
                self.render_template('index.html',)

            def post(self):
                file = self.request.get('file')
                obj = MyModel()
                obj.blob = db.Blob(file.encode('utf8'))
                obj.put()
                self.response.out.write('upload ok')

        class download(BaseRequestHandler):
            def get(self):
                #id = self.request.get('id')
                o = MyModel.all().get()
                #self.response.out.write(''.join('%s: %s <br/>' % (a, getattr(o, a)) for a in dir(o)))
                self.response.out.write(o)

        application = webapp.WSGIApplication(
            [
                ('/?', upload),
                ('/download', download),
            ],
            debug=True)

        def main():
            run_wsgi_app(application)

        if __name__ == "__main__":
            main()

    My index.html is:

        <form action="/" method="post">
            <input type="file" name="file" />
            <input type="submit" />
        </form>

    And it shows:

        <__main__.MyModel object at 0x02506830>

    But I don't want to see this; I want to download the file. How should I change my code to do that? Thanks.
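
    A minimal sketch of what the download handler could look like, assuming the goal is to stream the stored bytes back to the browser (the Content-Type and filename below are illustrative). Note two likely bugs in the code above: the form needs enctype="multipart/form-data" to submit real file contents, and encoding the value as UTF-8 will corrupt binary files.

        class download(BaseRequestHandler):
            def get(self):
                o = MyModel.all().get()
                if o is None or o.blob is None:
                    self.error(404)
                    return
                # Send the raw bytes with a download disposition
                # instead of printing the entity's repr.
                self.response.headers['Content-Type'] = 'application/octet-stream'
                self.response.headers['Content-Disposition'] = 'attachment; filename="upload.bin"'
                self.response.out.write(o.blob)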

  • Azure Grid Computing - Worker Roles as HPC Compute Nodes

    - by JoshReuben
    Overview

    - With HPC 2008 R2 SP1 you can add Azure worker roles as compute nodes in a local Windows HPC Server cluster.
    - The Windows Azure subscription is charged like any other Azure service - for the time that the role instances are available, as well as for the compute and storage services that are used on the nodes.
    - Win-win? Azure charges the compute-hour cost (according to VM size) amortized over a month, so you save on purchasing compute node hardware. Microsoft wins because you need to purchase HPC to have a local head node for managing this compute cluster grid distributed in the cloud.
    - Blob storage is used to hold the input and output files of each job. I can see how Parametric Sweep HPC jobs can be supported (where the same job is run multiple times on each node against different input units), but not MPI.NET (where different HPC job instances function as coordinated agents and conduct master-slave inter-process communication), unless Azure is somehow tunneling MPI communication through inter-WorkerRole Azure Queues.
    - This is not the end of the story for Azure grid computing. If MS requires you to purchase a local HPC license (and administer it), what's to stop a 3rd party from doing this and exposing the HPC WCF Broker Service to you for managing compute nodes? If MS doesn't provide the head node as a service, someone else will!

    Process

    - Requires creation of a worker node template that specifies a connection to an existing subscription for Windows Azure plus an availability policy for the worker nodes.
    - After worker nodes are added to the cluster, you can start them, which provisions the Windows Azure role instances, and then bring them online to run HPC cluster jobs.
    - A Windows Azure worker role instance runs an HPC-compatible Azure guest operating system, which runs on the VMs that host your service. The guest operating system is updated monthly. You can choose to upgrade the guest OS for your service automatically each time an update is released - all role instances defined by your service will run on the guest operating system version that you specify. See Windows Azure Guest OS Releases and SDK Compatibility Matrix (http://go.microsoft.com/fwlink/?LinkId=190549).
    - Use the hpcpack command to upload file packages and install files to run on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).

    Requirements

    - Assumes you have an Azure subscription account and the HPC head node installed and configured.
    - Install HPC Pack 2008 R2 SP1 - see Microsoft HPC Pack 2008 R2 Service Pack 1 Release Notes (http://go.microsoft.com/fwlink/?LinkID=202812).
    - Configure the head node to connect to the Internet - connectivity is provided by the connection of the head node to the enterprise network. You may need to configure a proxy client on the head node. Any cluster network topology (1-5) is supported.
    - Configure the firewall - allow outbound TCP traffic on the following ports: 80, 443, 5901, 5902, 7998, 7999.
    - Note: HPC Server uses Admin Mode (elevated privileges) in Windows Azure to give the service administrator of the subscription the necessary privileges to initialize HPC cluster services on the worker nodes.
    - Obtain a Windows Azure subscription certificate - the Windows Azure subscription must be configured with a public subscription (API) certificate: a valid X.509 certificate with a key size of at least 2048 bits.
      Generate a self-signed certificate and upload a .cer file via the Windows Azure Portal Account page > Manage my API Certificates link. See Using the Windows Azure Service Management API (http://go.microsoft.com/fwlink/?LinkId=205526).
    - Import the certificate with an associated private key on the HPC cluster head node - into the trusted root store of the local computer account.

    Obtain Windows Azure Connection Information for HPC Server

    - Required for each worker node template.
    - Copy from the Azure portal - get it from: navigation pane > Hosted Services > Storage Accounts & CDN.
    - Subscription ID - a 32-char hex string in the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx. In the Properties pane.
    - Subscription certificate thumbprint - a 40-char hex string (you need to remove spaces). In Management Certificates > Properties pane.
    - Service name - the value of <ServiceName> configured in the public URL of the service (http://<ServiceName>.cloudapp.net). In Hosted Services > Properties pane.
    - Blob storage account name - the value of <StorageAccountName> configured in the public URL of the account (http://<StorageAccountName>.blob.core.windows.net). In Storage Accounts > Properties pane.

    Import the Azure Subscription Certificate on the HPC Head Node

    - Enables the services for Windows HPC Server to authenticate properly with the Windows Azure subscription.
    - Use the Certificates MMC snap-in to import the certificate to the Trusted Root Certification Authorities store of the local computer account. The certificate must be in PFX format (.pfx or .p12 file) with a private key that is protected by a password. See Certificates (http://go.microsoft.com/fwlink/?LinkId=163918).
    - To open the Certificates snap-in: Run > mmc. File > Add/Remove Snap-in > Certificates > Computer account > Local Computer.
    - To import the certificate via the wizard: Certificates > Trusted Root Certification Authorities > Certificates > All Tasks > Import. A scripted equivalent is sketched below.
    - After the certificate is imported, it appears in the details pane in the Certificates snap-in. You can open the certificate to check its status.
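    A hedged sketch of that import, scripted instead of clicked through (file path, password, and file name are illustrative; the .NET X509 API shown is standard and works from PowerShell on the head node):

        # Import a password-protected PFX into the Trusted Root store of the
        # local computer account (equivalent to the Certificates snap-in steps).
        $pfxPath = 'C:\certs\azure-subscription.pfx'   # illustrative path
        $pfx = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2(
            $pfxPath, 'P@ssw0rd!', 'MachineKeySet,PersistKeySet')
        $store = New-Object System.Security.Cryptography.X509Certificates.X509Store('Root', 'LocalMachine')
        $store.Open('ReadWrite')
        $store.Add($pfx)
        $store.Close()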
    Configure a Proxy Client on the HPC Head Node

    - The following Windows HPC Server services must be able to communicate over the Internet (through the firewall) with the services for Windows Azure: HPCManagement, HPCScheduler, HPCBrokerWorker.

    Create a Windows Azure Worker Node Template

    - Edit HPC node templates in HPC Node Template Editor.
    - Specify: 1) Windows Azure subscription connection info (unique service name) for adding a set of worker nodes to the cluster, and 2) a worker node availability policy - rules for deploying / removing worker role instances in Windows Azure.
      - HPC Cluster Manager > Configuration > Navigation Pane > Node Templates > Actions pane > New -> Create Node Template Wizard, or Edit -> Node Template Editor.
      - Choose Node Template Type page - Windows Azure worker node template.
      - Specify Template Name page - template name and description.
      - Provide Connection Information page - Azure Subscription ID (text) and subscription certificate (browse).
      - Provide Service Information page - Azure service name plus blob storage account name (optionally click Retrieve Connection Information to get the list of those available from Azure - possible LRT).
      - Configure Azure Availability Policy page - how Windows Azure worker nodes start / stop (online / offline the worker role instance - add / remove) - manual or automatic.
      - For automatic: in the Configure Windows Azure Worker Availability Policy dialog, select days and hours for worker nodes to start / stop.
    - To validate the Windows Azure connection information, on the template's Connection Information tab > Validate connection information.
    - You can upload a file package to the storage account that is specified in the template - e.g. upload application or service files that will run on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).

    Add Azure Worker Nodes to the HPC Cluster

    - Use the Add Node Wizard - specify: 1) the worker node template, 2) the number of worker nodes (within the quota of role instances in the Azure subscription), and 3) the VM size of the worker nodes: ExtraSmall, Small, Medium, Large, or ExtraLarge.
    - To add worker nodes of different sizes, you must run the Add Node Wizard separately for each size.
    - All worker nodes that are added to the cluster by using a specific worker node template define a set of worker nodes that will be deployed and managed together in Windows Azure when you start the nodes. This includes worker nodes that you add later by using the worker node template and, if you choose, worker nodes of different sizes. You cannot start, stop, or delete individual worker nodes.
    - To add Windows Azure worker nodes:
      - In HPC Cluster Manager: Node Management > Actions pane > Add Node -> Add Node Wizard.
      - Select Deployment Method page - Add Azure Worker nodes.
      - Specify New Nodes page - select a worker node template, specify the number and size of the worker nodes.
    - After you add worker nodes to the cluster, they are in the Not-Deployed state, and they have a health state of Unapproved. Before you can use the worker nodes to run jobs, you must start them and then bring them online.
    - Worker nodes are numbered consecutively in a naming series that begins with the root name AzureCN - this is non-configurable.

    Deploying Windows Azure Worker Nodes

    - To deploy the role instances in Windows Azure, start the worker nodes added to the HPC cluster and bring the nodes online so that they are available to run cluster jobs. This can be configured in the HPC Azure worker node template (Azure Availability Policy) to be automatic or manual.
    - The Start, Stop, and Delete actions take place on the set of worker nodes that are configured by a specific worker node template. You cannot perform one of these actions on a single worker node in a set. You also cannot perform a single action on two sets of worker nodes (specified by two different worker node templates).
    - Starting a set of worker nodes deploys a set of worker role instances in Windows Azure, which can take some time to complete, depending on the number of worker nodes and the performance of Windows Azure.
    - To start worker nodes manually and bring them online:
      - In HPC Node Management > Navigation Pane > Nodes > List / Heat Map view, select one or more worker nodes.
      - Actions pane > Start - in the Start Azure Worker Nodes dialog, select a node template.
      - The state of the worker nodes changes from Not Deployed; track the provisioning progress in the worker node Details Pane > Provisioning Log tab.
      - If there were errors during the provisioning of one or more worker nodes, the state of those nodes is set to Unknown and the node health is set to Unapproved. To determine the reason for the failure, review the provisioning logs for the nodes.
      - After a worker node starts successfully, the node state changes to Offline. To bring the nodes online, select the nodes that are in the Offline state > Bring Online.
    - Troubleshooting:
      - Check the node template.
      - Use telnet to test connectivity: telnet <ServiceName>.cloudapp.net 7999
      - Check node status - deployment status information appears in the service account information in the Windows Azure Portal (HPC queries this); see the node status information for any failed nodes in HPC Node Management.
    - When role instances are deployed, file packages that were previously uploaded to the storage account using the hpcpack command are automatically installed. You can also upload file packages to storage after the worker nodes are started, and then manually install them on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).
    - To remove a set of role instances in Windows Azure, stop the nodes by using HPC Cluster Manager (apply the Stop action). This deletes the role instances from the service and changes the state of the worker nodes in the HPC cluster to Not Deployed.
    - Each time that you start a set of worker nodes, two proxy role instances (size Small) are configured in Windows Azure to facilitate communication between HPC Cluster Manager and the worker nodes. The proxy role instances are not listed in HPC Cluster Manager after the worker nodes are added. However, the instances appear in the Windows Azure Portal. The proxy role instances incur charges in Windows Azure along with the worker node instances, and they count toward the quota of role instances in the subscription.

  • Backing up SQL Azure

    - by Herve Roggero
    That's it!!! After many days and nights... and an amazing set of challenges, I just released the Enzo Backup for SQL Azure BETA product (http://www.bluesyntax.net). Clearly, that was one of the most challenging projects I have done so far. Why??? Because creating a highly redundant system, expecting failures at all times, for an operation that could take anywhere from a couple of minutes to a couple of hours, and still making sure that the operation completes at some point, was remarkably challenging. Some routines have more error trapping than actual code... Here are a few things I had to take into account:

    - Exponential backoff (explained in another post)
    - Dual dynamic determination of the number of rows to back up
    - Dynamic reduction of the batch rows used to restore the data
    - Implementation of a flexible BULK Insert API that the tool could use
    - Implementation of a custom Storage REST API to handle automatic retries
    - Automatic data chunking based on blob sizes
    - Compression of data
    - Implementation of the Task Parallel Library at multiple levels, including deserialization of Azure Table rows and backup/restore operations
    - Full or partial restore operations
    - Implementation of a Ghost class to serialize/deserialize data tables

    And that's just a partial list... I will explain what some of those mean in future blog posts. A lot of the complexity had to do with implementing a form of retry logic, depending on the resource and the operation.
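
    As a rough sketch of the exponential backoff shape mentioned in that list - the general pattern only, not Enzo Backup's actual code; names and limits are illustrative:

        using System;
        using System.Threading;

        static T ExecuteWithBackoff<T>(Func<T> operation, int maxAttempts = 5)
        {
            for (int attempt = 0; ; attempt++)
            {
                try
                {
                    return operation(); // e.g. one batch read against SQL Azure
                }
                catch (Exception)
                {
                    if (attempt >= maxAttempts - 1) throw; // give up after the last attempt
                    // Wait 1s, 2s, 4s, 8s, ... before retrying the transient failure.
                    Thread.Sleep(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
                }
            }
        }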

  • Could not log-in properly but shows no error in joomla

    - by saeha
    This is what I did. I added variables in \libraries\joomla\database\table\user.php:

        var $img_content = null; // contains the blob type data
        var $img_name = null;
        var $img_type = null;

    Then I added this code in \components\com_user\controller.php:

        $file = JRequest::getVar( 'pic', '', 'files', 'array' );
        if (isset($file['name'])) {
            jimport('joomla.filesystem.file');
            $fileName = $file['name'];
            $tmpName  = $file['tmp_name'];
            $fileSize = $file['size'];
            $fileType = $file['type'];
            $fp = fopen($tmpName, 'r');
            $content = fread($fp, filesize($tmpName));
            //$content = addslashes($content);
            fclose($fp);
            $user->set('img_name', $fileName);
            $user->set('img_type', $fileType);
            $user->set('img_content', $content);
        }

    That works fine, but I found this problem when logging in with a new user that has an uploaded photo; other users with an empty img_content field can log in properly. What happens is that when I log in using the user with an uploaded photo, it doesn't redirect properly - it just returns to the login page. But when I log in through the backend using another user which is a super admin, I can see that user appears as logged in. I started saving the images in the database because I was having problems with the upload directory when I deployed the site. I think the login is affected by the blob type data in the database. Could that be the problem? What could be the solution? -saeha
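
    One hedged guess, not a confirmed fix: raw binary in img_content can break the escaping of the user query or the serialized session data Joomla keeps for the user (the commented-out addslashes() line hints at exactly this). Base64-encoding the blob before storing it sidesteps both; decode with base64_decode() when outputting the image, and note the column may need to grow by ~33%:

        $fp = fopen($tmpName, 'rb');
        $content = base64_encode(fread($fp, filesize($tmpName))); // text-safe blob
        fclose($fp);
        $user->set('img_content', $content);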

  • You Can't Upload An Empty File To SharePoint 2007 Or SharePoint 2010

    - by Brian Jackett
    The title of this post is pretty self-explanatory, but I thought it worth mentioning since I had never run across this rule until just recently. A few weeks ago I was testing out a new workflow attached to a SharePoint 2007 document library. I uploaded various file types to ensure all were handled properly. One of the files I happened to test with was an empty .txt file, to which I got the following error (screenshot omitted). As you can see from the error message, you aren't allowed to upload a file that is empty. Fast forward to this week, when I was doing some research for my upcoming SharePoint 2010 beta exams. I remembered the error I got a few weeks ago and decided to try it out with SharePoint 2010 as well. No surprises - I got a similar error.

    Conclusion

    Next time you are uploading files to a SharePoint 2007 or 2010 document library, make sure the file is not empty. Coincidentally, when I tweeted about this issue a few friends replied that they had also found this error recently. I don't know the internal reasoning why this is prevented, but I assume it has something to do with how the blob for the file is stored in the database. I assume that this would still be the case even if you had Remote Blob Storage (RBS) configured for your farm, but I don't have access to such a farm to confirm. If anyone reading this does have access and wants to confirm, that would be appreciated - just leave a comment.

    -Frog Out

  • How do large blobs affect SQL delete performance, and how can I mitigate the impact?

    - by Max Pollack
    I'm currently experiencing a strange issue that my understanding of SQL Server doesn't quite mesh with. We use SQL as the file storage for our internal storage service, and our database has about half a million rows in it. Most of the files (86%) are 1 MB or under, but even on fresh copies of our database, where we simply populate the table with data for the purposes of a test, it appears that rows with large amounts of data stored in a BLOB frequently cause timeouts when our SQL Server is under load.

    My understanding of how SQL Server deletes rows is that it's a garbage collection process, i.e. the row is marked as a ghost and is later deleted by the ghost cleanup process after the changes are copied to the transaction log. This suggests to me that, regardless of the size of the data in the blob, row deletion should be close to instantaneous. However, when deleting these rows we are definitely experiencing large numbers of timeouts and astoundingly low performance. In our test data set, it's files over 30 MB that cause this issue. This is an edge case; we don't frequently encounter these, and even though we're looking into SQL FILESTREAM as a solution to some of our problems, we're trying to narrow down where these issues are originating from.

    We ARE performing our deletes inside of a transaction. We're also performing updates to metadata such as file size stats, but these exist in a separate table away from the file data itself. Hierarchy data is stored in the table that contains the file information. Really, in the end it's not so much what we're doing around the deletes that matters; we just can't find any references to low delete performance on rows that contain a large amount of data in a BLOB. We are trying to determine if this is even an avenue worth exploring, or if it has to be one of our processes around the delete that's causing the issue.

    Are there any situations in which this could occur? Is it common for a database server to come to the point of complete timeouts when many of these deletes are occurring simultaneously? Is there a way to combat this issue if it exists? (Cross-posted from Stack Overflow.)
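
    One commonly suggested mitigation, sketched here with illustrative table and column names (not the poster's actual schema): delete in small batches rather than one big transaction, so each transaction stays short and the ghost cleanup and log flush can keep up.

        -- Assumes a temp table #FilesToDelete holding the target FileIds.
        WHILE 1 = 1
        BEGIN
            DELETE TOP (100) f
            FROM dbo.Files AS f
            INNER JOIN #FilesToDelete AS d ON d.FileId = f.FileId;

            IF @@ROWCOUNT = 0 BREAK; -- nothing left to delete
        END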

  • PowerShell create new Azure VM from uploaded disk (not image)

    - by MikeBaz
    I have a VHD in Azure storage. That VHD is configured as an OS disk through a command like the following:

        Add-AzureDisk -DiskName $newCode `
            -MediaLocation "http://$script:accountName.blob.core.windows.net/$newCode/$sourceVhdName.vhd" `
            -Label $newCode -OS "Windows"

    I would like to create a new VM pointing at that disk. From what I can tell, if I was doing this with an image I would do something like:

        New-AzureVMConfig -Name $newCode -InstanceSize $instanceSize `
            -MediaLocation "http://$script:accountName.blob.core.windows.net/$newCode/$sourceVhdName.vhd" -ImageName $newCode `
            | Add-AzureProvisioningConfig -Windows -Password $adminPassword `
            | New-AzureVM -ServiceName $newCode

    However, this is wrong for me because I don't have an image - I have a configured VHD that is not sysprepped and can't be. How can I create the VM in PowerShell to point at the existing disk, like I can through the portal?
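
    A hedged sketch with the classic (service management) Azure cmdlets: New-AzureVMConfig also has a disk-based parameter set, so pointing -DiskName at the disk registered with Add-AzureDisk - and skipping Add-AzureProvisioningConfig, since the disk is already configured - should be the shape of the answer:

        # Boot from the registered OS disk by name; no provisioning step,
        # because the VHD already contains a configured Windows install.
        New-AzureVMConfig -Name $newCode -InstanceSize $instanceSize -DiskName $newCode |
            New-AzureVM -ServiceName $newCode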

  • PhpMyAdmin Hangs On MySQL Error

    - by user75228
    I'm currently running phpMyAdmin 4.0.10 (the latest version supporting PHP 4.2.X) on my Amazon EC2, connecting to a MySQL database on RDS. Everything works perfectly fine except actions that return a MySQL error message. Whenever I perform any kind of action that will return a MySQL error, phpMyAdmin will hang with the yellow "Loading" box forever without displaying anything. For example, if I perform the following command in the MySQL CLI:

        select * from 123;

    It instantly returns the following error:

        ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that
        corresponds to your MySQL server version for the right syntax to use near '123' at line 1

    which is completely normal, because table 123 doesn't exist. However, if I execute the exact same command in the "SQL" box in phpMyAdmin, after I click "Go" it displays "Loading" and stops there forever. Has anyone ever encountered this kind of issue with phpMyAdmin? Is this a bug, or do I have something wrong with my config.inc.php? Any help would be much appreciated. I also noticed these error messages in my Apache error logs:

        /opt/apache/bin/httpd: symbol lookup error: /opt/php/lib/php/extensions/no-debug-non-zts-20060613/iconv.so: undefined symbol: libiconv_open
        /opt/apache/bin/httpd: symbol lookup error: /opt/php/lib/php/extensions/no-debug-non-zts-20060613/iconv.so: undefined symbol: libiconv_open
        /opt/apache/bin/httpd: symbol lookup error: /opt/php/lib/php/extensions/no-debug-non-zts-20060613/iconv.so: undefined symbol: libiconv_open

    Below are my config.inc.php settings:

        <?php
        /* vim: set expandtab sw=4 ts=4 sts=4: */
        /**
         * phpMyAdmin sample configuration, you can use it as base for
         * manual configuration. For easier setup you can use setup/
         *
         * All directives are explained in documentation in the doc/ folder
         * or at <http://docs.phpmyadmin.net/>.
         *
         * @package PhpMyAdmin
         */

        /*
         * This is needed for cookie based authentication to encrypt password in
         * cookie
         */
        $cfg['blowfish_secret'] = 'something_random'; /* YOU MUST FILL IN THIS FOR COOKIE AUTH! */

        /*
         * Servers configuration
         */
        $i = 0;

        /*
         * First server
         */
        $i++;
        /* Authentication type */
        $cfg['Servers'][$i]['auth_type'] = 'cookie';
        /* Server parameters */
        $cfg['Servers'][$i]['host'] = '*.rds.amazonaws.com';
        $cfg['Servers'][$i]['connect_type'] = 'tcp';
        $cfg['Servers'][$i]['compress'] = true;
        /* Select mysql if your server does not have mysqli */
        $cfg['Servers'][$i]['extension'] = 'mysqli';
        $cfg['Servers'][$i]['AllowNoPassword'] = false;
        $cfg['LoginCookieValidity'] = '3600';

        /*
         * phpMyAdmin configuration storage settings.
         */
        /* User used to manipulate with storage */
        $cfg['Servers'][$i]['controlhost'] = '*.rds.amazonaws.com';
        $cfg['Servers'][$i]['controluser'] = 'pma';
        $cfg['Servers'][$i]['controlpass'] = 'password';
        /* Storage database and tables */
        $cfg['Servers'][$i]['pmadb'] = 'phpmyadmin';
        $cfg['Servers'][$i]['bookmarktable'] = 'pma__bookmark';
        $cfg['Servers'][$i]['relation'] = 'pma__relation';
        $cfg['Servers'][$i]['table_info'] = 'pma__table_info';
        $cfg['Servers'][$i]['table_coords'] = 'pma__table_coords';
        $cfg['Servers'][$i]['pdf_pages'] = 'pma__pdf_pages';
        $cfg['Servers'][$i]['column_info'] = 'pma__column_info';
        $cfg['Servers'][$i]['history'] = 'pma__history';
        $cfg['Servers'][$i]['table_uiprefs'] = 'pma__table_uiprefs';
        $cfg['Servers'][$i]['tracking'] = 'pma__tracking';
        $cfg['Servers'][$i]['designer_coords'] = 'pma__designer_coords';
        $cfg['Servers'][$i]['userconfig'] = 'pma__userconfig';
        $cfg['Servers'][$i]['recent'] = 'pma__recent';
        /* Contrib / Swekey authentication */
        // $cfg['Servers'][$i]['auth_swekey_config'] = '/etc/swekey-pma.conf';

        /*
         * End of servers configuration
         */

        /*
         * Directories for saving/loading files from server
         */
        $cfg['UploadDir'] = '';
        $cfg['SaveDir'] = '';

        /**
         * Defines whether a user should be displayed a "show all (records)"
         * button in browse mode or not.
         * default = false
         */
        //$cfg['ShowAll'] = true;

        /**
         * Number of rows displayed when browsing a result set. If the result
         * set contains more rows, "Previous" and "Next".
         * default = 30
         */
        $cfg['MaxRows'] = 50;

        /**
         * disallow editing of binary fields
         * valid values are:
         *   false    allow editing
         *   'blob'   allow editing except for BLOB fields
         *   'noblob' disallow editing except for BLOB fields
         *   'all'    disallow editing
         * default = blob
         */
        //$cfg['ProtectBinary'] = 'false';

        /**
         * Default language to use, if not browser-defined or user-defined
         * (you find all languages in the locale folder)
         * uncomment the desired line:
         * default = 'en'
         */
        //$cfg['DefaultLang'] = 'en';
        //$cfg['DefaultLang'] = 'de';

        /**
         * default display direction (horizontal|vertical|horizontalflipped)
         */
        //$cfg['DefaultDisplay'] = 'vertical';

        /**
         * How many columns should be used for table display of a database?
         * (a value larger than 1 results in some information being hidden)
         * default = 1
         */
        //$cfg['PropertiesNumColumns'] = 2;

        /**
         * Set to true if you want DB-based query history.If false, this utilizes
         * JS-routines to display query history (lost by window close)
         *
         * This requires configuration storage enabled, see above.
         * default = false
         */
        //$cfg['QueryHistoryDB'] = true;

        /**
         * When using DB-based query history, how many entries should be kept?
         *
         * default = 25
         */
        //$cfg['QueryHistoryMax'] = 100;

        /*
         * You can find more configuration options in the documentation
         * in the doc/ folder or at <http://docs.phpmyadmin.net/>.
         */
        ?>
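
    One hedged thing worth checking, given those iconv errors: phpMyAdmin converts MySQL error messages between charsets, so an iconv.so built against a mismatched libiconv could kill the PHP worker mid-response and leave the AJAX "Loading" box hanging exactly when an error needs to be rendered. A minimal probe script (hypothetical file, any name works):

        <?php
        // If requesting this file reproduces the libiconv_open symbol lookup
        // error in the Apache log, the iconv extension build is the culprit.
        var_dump(function_exists('iconv')
            ? iconv('UTF-8', 'ASCII//TRANSLIT', 'test')
            : 'iconv missing');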

  • MySQL Cluster transaction isolation level - READ_COMMITTED

    - by Doori Bar
    I'm learning by mostly reading the documentation. Unfortunately, http://dev.mysql.com/doc/refman/5.1/en/set-transaction.html#isolevel_read-committed doesn't say anything, while it says everything. Confused? Me too. The ndb engine supports only the READ_COMMITTED transaction isolation level.

    A. It starts by saying "sets and reads its own fresh snapshot", which I translate to: the transaction has a separate 'zone', and whatever it stores there is what it reads back.

    B. While outside of the transaction, the old values are unlocked.

    C. It continues with the "for locking reads" sentence - no idea what that means.

    Question: they claim only the READ_COMMITTED transaction isolation level is supported, but while handling a BLOB or a TEXT, they say the isolation is now "locked for reading" too. So is that a contradiction? Can a transaction LOCK for reading just as well while handling something other than BLOB/TEXT (such as integers)?
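
    For reference on point C: a "locking read" is a SELECT that takes explicit row locks instead of (or in addition to) reading from the snapshot. In standard MySQL syntax (table and column names illustrative):

        -- Plain read: served from the transaction's snapshot (non-locking).
        SELECT balance FROM accounts WHERE id = 1;

        -- Locking reads: lock the touched rows until commit/rollback.
        SELECT balance FROM accounts WHERE id = 1 LOCK IN SHARE MODE;
        SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;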

  • Null pointer exception comparing two strings in Java

    - by David
    I got this error message and I'm not quite sure what's wrong:

        Exception in thread "main" java.lang.NullPointerException
            at Risk.runTeams(Risk.java:384)
            at Risk.blobRunner(Risk.java:220)
            at Risk.genRunner(Risk.java:207)
            at Risk.main(Risk.java:176)

    Here are the relevant bits of code (I will draw attention to the line numbers within the error message via comments in the code, as well as inputs I put into the program while it's running, where relevant):

        public class Risk
        {
            ...
            public static void main (String[]arg)
            {
                String CPUcolor = CPUcolor () ;
                genRunner (CPUcolor) ; //line 176
                ...
            }
            ...
            // When this method runs I select 0 and run blob since it's my only option.
            // There's nothing wrong with this method so far as I know; it is only
            // significant because it takes me to blobRunner and because another of
            // our relevant line numbers appears.
            public static void genRunner (String CPUcolor)
            {
                String[] strats = new String[1] ;
                strats[0] = "0 - Blob" ;
                int s = chooseStrat (strats) ;
                if (s == 0) blobRunner (CPUcolor) ; // this is line 207
            }
            ...
            public static void blobRunner (String CPUcolor)
            {
                System.out.println ("blob Runner") ;
                int turn = 0 ;
                boolean gameOver = false ;
                Dice other = new Dice ("other") ;
                Dice a1 = new Dice ("a1") ;
                Dice a2 = new Dice ("a2") ;
                Dice a3 = new Dice ("a3") ;
                Dice d1 = new Dice ("d1") ;
                Dice d2 = new Dice ("d2") ;
                space (5) ;
                Territory[] board = makeBoard() ;
                IdiceRoll (other) ;
                String[] colors = runTeams(CPUcolor) ; //this is line 220
                Card[] deck = Card.createDeck () ;
                System.out.println (StratUtil.canTurnIn (deck)) ;
                while (gameOver == false)
                {
                    idler (deck) ;
                    board = assignTerri (board, colors) ;
                    checkBoard (board, colors) ;
                }
            }
            ...
            public static String[] runTeams (String CPUcolor)
            {
                boolean z = false ;
                String[] a = new String[6] ;
                while (z == false)
                {
                    a = assignTeams () ;
                    printOrder (a) ;
                    boolean CPU = false ;
                    for (int i = 0; i<a.length; i++)
                    {
                        if (a[i].equals(CPUcolor)) CPU = true ; //this is line 384
                    }
                    if (CPU==false)
                    {
                        System.out.println ("ERROR YOU NEED TO INCLUDE THE COLOR OF THE CPU IN THE TURN ORDER") ;
                        runTeams (CPUcolor) ;
                    }
                    System.out.println ("is this turn order correct? (Y/N)") ;
                    String s = getIns () ;
                    while (!((s.equals ("y")) || (s.equals ("Y")) || (s.equals ("n")) || (s.equals ("N"))))
                    {
                        System.out.println ("try again") ;
                        s = getIns () ;
                    }
                    if (s.equals ("y") || s.equals ("Y") ) z = true ;
                }
                return a ;
            }
            ...
        } // This } closes the class

    The reason I don't think I should be getting a NullPointerException is that in this line:

        a[i].equals(CPUcolor)

    a at index i holds a string, and CPUcolor is a string. Both at this point definitely have a value; neither is null. Can anyone please tell me what's going wrong?
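
    A hedged observation rather than a definitive fix: the trace points at a[i].equals(CPUcolor), and the only way that expression throws a NullPointerException is a[i] being null - i.e. assignTeams() returned a 6-slot array that is not fully populated. Reversing the comparison makes the loop null-safe while you track down why:

        for (int i = 0; i < a.length; i++)
        {
            if (CPUcolor.equals(a[i])) // safe even when a[i] is null
            {
                CPU = true ;
            }
        }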

  • Invalid algorithm specified on Windows 2003 Server only

    - by JL
    I am decoding a file using the following method: string outFileName = zfoFileName.Replace(".zfo", "_tmp.zfo"); FileStream inFile = null; FileStream outFile = null; inFile = File.Open(zfoFileName, FileMode.Open); outFile = File.Create(outFileName); LargeCMS.CMS cms = new LargeCMS.CMS(); cms.Decode(inFile, outFile); This is working fine on my Win 7 dev machine, but on a Windows 2003 server production machine it fails with the following exception: Exception: System.Exception: CryptMsgUpdate error #-2146893816 --- System.ComponentModel.Win32Exception: Invalid algorithm specified --- End of inner exception stack trace --- at LargeCMS.CMS.Decode(FileStream inFile, FileStream outFile) Here are the classes below which I call to do the decoding, if needed I can upload a sample file for decoding, its just strange it works on Win 7, and not on Win2k3 server: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.IO; using System.Security.Cryptography; using System.Security.Cryptography.X509Certificates; using System.Runtime.InteropServices; using System.ComponentModel; namespace LargeCMS { class CMS { // File stream to use in callback function private FileStream m_callbackFile; // Streaming callback function for encoding private Boolean StreamOutputCallback(IntPtr pvArg, IntPtr pbData, int cbData, Boolean fFinal) { // Write all bytes to encoded file Byte[] bytes = new Byte[cbData]; Marshal.Copy(pbData, bytes, 0, cbData); m_callbackFile.Write(bytes, 0, cbData); if (fFinal) { // This is the last piece. Close the file m_callbackFile.Flush(); m_callbackFile.Close(); m_callbackFile = null; } return true; } // Encode CMS with streaming to support large data public void Encode(X509Certificate2 cert, FileStream inFile, FileStream outFile) { // Variables Win32.CMSG_SIGNER_ENCODE_INFO SignerInfo; Win32.CMSG_SIGNED_ENCODE_INFO SignedInfo; Win32.CMSG_STREAM_INFO StreamInfo; Win32.CERT_CONTEXT[] CertContexts = null; Win32.BLOB[] CertBlobs; X509Chain chain = null; X509ChainElement[] chainElements = null; X509Certificate2[] certs = null; RSACryptoServiceProvider key = null; BinaryReader stream = null; GCHandle gchandle = new GCHandle(); IntPtr hProv = IntPtr.Zero; IntPtr SignerInfoPtr = IntPtr.Zero; IntPtr CertBlobsPtr = IntPtr.Zero; IntPtr hMsg = IntPtr.Zero; IntPtr pbPtr = IntPtr.Zero; Byte[] pbData; int dwFileSize; int dwRemaining; int dwSize; Boolean bResult = false; try { // Get data to encode dwFileSize = (int)inFile.Length; stream = new BinaryReader(inFile); pbData = stream.ReadBytes(dwFileSize); // Prepare stream for encoded info m_callbackFile = outFile; // Get cert chain chain = new X509Chain(); chain.Build(cert); chainElements = new X509ChainElement[chain.ChainElements.Count]; chain.ChainElements.CopyTo(chainElements, 0); // Get certs in chain certs = new X509Certificate2[chainElements.Length]; for (int i = 0; i < chainElements.Length; i++) { certs[i] = chainElements[i].Certificate; } // Get context of all certs in chain CertContexts = new Win32.CERT_CONTEXT[certs.Length]; for (int i = 0; i < certs.Length; i++) { CertContexts[i] = (Win32.CERT_CONTEXT)Marshal.PtrToStructure(certs[i].Handle, typeof(Win32.CERT_CONTEXT)); } // Get cert blob of all certs CertBlobs = new Win32.BLOB[CertContexts.Length]; for (int i = 0; i < CertContexts.Length; i++) { CertBlobs[i].cbData = CertContexts[i].cbCertEncoded; CertBlobs[i].pbData = CertContexts[i].pbCertEncoded; } // Get CSP of client certificate key = (RSACryptoServiceProvider)certs[0].PrivateKey; bResult = 
Win32.CryptAcquireContext( ref hProv, key.CspKeyContainerInfo.KeyContainerName, key.CspKeyContainerInfo.ProviderName, key.CspKeyContainerInfo.ProviderType, 0 ); if (!bResult) { throw new Exception("CryptAcquireContext error #" + Marshal.GetLastWin32Error().ToString(), new Win32Exception(Marshal.GetLastWin32Error())); } // Populate Signer Info struct SignerInfo = new Win32.CMSG_SIGNER_ENCODE_INFO(); SignerInfo.cbSize = Marshal.SizeOf(SignerInfo); SignerInfo.pCertInfo = CertContexts[0].pCertInfo; SignerInfo.hCryptProvOrhNCryptKey = hProv; SignerInfo.dwKeySpec = (int)key.CspKeyContainerInfo.KeyNumber; SignerInfo.HashAlgorithm.pszObjId = Win32.szOID_OIWSEC_sha1; // Populate Signed Info struct SignedInfo = new Win32.CMSG_SIGNED_ENCODE_INFO(); SignedInfo.cbSize = Marshal.SizeOf(SignedInfo); SignedInfo.cSigners = 1; SignerInfoPtr = Marshal.AllocHGlobal(Marshal.SizeOf(SignerInfo)); Marshal.StructureToPtr(SignerInfo, SignerInfoPtr, false); SignedInfo.rgSigners = SignerInfoPtr; SignedInfo.cCertEncoded = CertBlobs.Length; CertBlobsPtr = Marshal.AllocHGlobal(Marshal.SizeOf(CertBlobs[0]) * CertBlobs.Length); for (int i = 0; i < CertBlobs.Length; i++) { Marshal.StructureToPtr(CertBlobs[i], new IntPtr(CertBlobsPtr.ToInt64() + (Marshal.SizeOf(CertBlobs[i]) * i)), false); } SignedInfo.rgCertEncoded = CertBlobsPtr; // Populate Stream Info struct StreamInfo = new Win32.CMSG_STREAM_INFO(); StreamInfo.cbContent = dwFileSize; StreamInfo.pfnStreamOutput = new Win32.StreamOutputCallbackDelegate(StreamOutputCallback); // TODO: CMSG_DETACHED_FLAG // Open message to encode hMsg = Win32.CryptMsgOpenToEncode( Win32.X509_ASN_ENCODING | Win32.PKCS_7_ASN_ENCODING, 0, Win32.CMSG_SIGNED, ref SignedInfo, null, ref StreamInfo ); if (hMsg.Equals(IntPtr.Zero)) { throw new Exception("CryptMsgOpenToEncode error #" + Marshal.GetLastWin32Error().ToString(), new Win32Exception(Marshal.GetLastWin32Error())); } // Process the whole message gchandle = GCHandle.Alloc(pbData, GCHandleType.Pinned); pbPtr = gchandle.AddrOfPinnedObject(); dwRemaining = dwFileSize; dwSize = (dwFileSize < 1024 * 1000 * 100) ? dwFileSize : 1024 * 1000 * 100; while (dwRemaining > 0) { // Update message piece by piece bResult = Win32.CryptMsgUpdate( hMsg, pbPtr, dwSize, (dwRemaining <= dwSize) ? 
true : false ); if (!bResult) { throw new Exception("CryptMsgUpdate error #" + Marshal.GetLastWin32Error().ToString(), new Win32Exception(Marshal.GetLastWin32Error())); } // Move to the next piece pbPtr = new IntPtr(pbPtr.ToInt64() + dwSize); dwRemaining -= dwSize; if (dwRemaining < dwSize) { dwSize = dwRemaining; } } } finally { // Clean up if (gchandle.IsAllocated) { gchandle.Free(); } if (stream != null) { stream.Close(); } if (m_callbackFile != null) { m_callbackFile.Close(); } if (!CertBlobsPtr.Equals(IntPtr.Zero)) { Marshal.FreeHGlobal(CertBlobsPtr); } if (!SignerInfoPtr.Equals(IntPtr.Zero)) { Marshal.FreeHGlobal(SignerInfoPtr); } if (!hProv.Equals(IntPtr.Zero)) { Win32.CryptReleaseContext(hProv, 0); } if (!hMsg.Equals(IntPtr.Zero)) { Win32.CryptMsgClose(hMsg); } } } // Decode CMS with streaming to support large data public void Decode(FileStream inFile, FileStream outFile) { // Variables Win32.CMSG_STREAM_INFO StreamInfo; Win32.CERT_CONTEXT SignerCertContext; BinaryReader stream = null; GCHandle gchandle = new GCHandle(); IntPtr hMsg = IntPtr.Zero; IntPtr pSignerCertInfo = IntPtr.Zero; IntPtr pSignerCertContext = IntPtr.Zero; IntPtr pbPtr = IntPtr.Zero; IntPtr hStore = IntPtr.Zero; Byte[] pbData; Boolean bResult = false; int dwFileSize; int dwRemaining; int dwSize; int cbSignerCertInfo; try { // Get data to decode dwFileSize = (int)inFile.Length; stream = new BinaryReader(inFile); pbData = stream.ReadBytes(dwFileSize); // Prepare stream for decoded info m_callbackFile = outFile; // Populate Stream Info struct StreamInfo = new Win32.CMSG_STREAM_INFO(); StreamInfo.cbContent = dwFileSize; StreamInfo.pfnStreamOutput = new Win32.StreamOutputCallbackDelegate(StreamOutputCallback); // Open message to decode hMsg = Win32.CryptMsgOpenToDecode( Win32.X509_ASN_ENCODING | Win32.PKCS_7_ASN_ENCODING, 0, 0, IntPtr.Zero, IntPtr.Zero, ref StreamInfo ); if (hMsg.Equals(IntPtr.Zero)) { throw new Exception("CryptMsgOpenToDecode error #" + Marshal.GetLastWin32Error().ToString(), new Win32Exception(Marshal.GetLastWin32Error())); } // Process the whole message gchandle = GCHandle.Alloc(pbData, GCHandleType.Pinned); pbPtr = gchandle.AddrOfPinnedObject(); dwRemaining = dwFileSize; dwSize = (dwFileSize < 1024 * 1000 * 100) ? dwFileSize : 1024 * 1000 * 100; while (dwRemaining > 0) { // Update message piece by piece bResult = Win32.CryptMsgUpdate( hMsg, pbPtr, dwSize, (dwRemaining <= dwSize) ? 
true : false ); if (!bResult) { throw new Exception("CryptMsgUpdate error #" + Marshal.GetLastWin32Error().ToString(), new Win32Exception(Marshal.GetLastWin32Error())); } // Move to the next piece pbPtr = new IntPtr(pbPtr.ToInt64() + dwSize); dwRemaining -= dwSize; if (dwRemaining < dwSize) { dwSize = dwRemaining; } } // Get signer certificate info cbSignerCertInfo = 0; bResult = Win32.CryptMsgGetParam( hMsg, Win32.CMSG_SIGNER_CERT_INFO_PARAM, 0, IntPtr.Zero, ref cbSignerCertInfo ); if (!bResult) { throw new Exception("CryptMsgGetParam error #" + Marshal.GetLastWin32Error().ToString(), new Win32Exception(Marshal.GetLastWin32Error())); } pSignerCertInfo = Marshal.AllocHGlobal(cbSignerCertInfo); bResult = Win32.CryptMsgGetParam( hMsg, Win32.CMSG_SIGNER_CERT_INFO_PARAM, 0, pSignerCertInfo, ref cbSignerCertInfo ); if (!bResult) { throw new Exception("CryptMsgGetParam error #" + Marshal.GetLastWin32Error().ToString(), new Win32Exception(Marshal.GetLastWin32Error())); } // Open a cert store in memory with the certs from the message hStore = Win32.CertOpenStore( Win32.CERT_STORE_PROV_MSG, Win32.X509_ASN_ENCODING | Win32.PKCS_7_ASN_ENCODING, IntPtr.Zero, 0, hMsg ); if (hStore.Equals(IntPtr.Zero)) { throw new Exception("CertOpenStore error #" + Marshal.GetLastWin32Error().ToString(), new Win32Exception(Marshal.GetLastWin32Error())); } // Find the signer's cert in the store pSignerCertContext = Win32.CertGetSubjectCertificateFromStore( hStore, Win32.X509_ASN_ENCODING | Win32.PKCS_7_ASN_ENCODING, pSignerCertInfo ); if (pSignerCertContext.Equals(IntPtr.Zero)) { throw new Exception("CertGetSubjectCertificateFromStore error #" + Marshal.GetLastWin32Error().ToString(), new Win32Exception(Marshal.GetLastWin32Error())); } // Set message for verifying SignerCertContext = (Win32.CERT_CONTEXT)Marshal.PtrToStructure(pSignerCertContext, typeof(Win32.CERT_CONTEXT)); bResult = Win32.CryptMsgControl( hMsg, 0, Win32.CMSG_CTRL_VERIFY_SIGNATURE, SignerCertContext.pCertInfo ); if (!bResult) { throw new Exception("CryptMsgControl error #" + Marshal.GetLastWin32Error().ToString(), new Win32Exception(Marshal.GetLastWin32Error())); } } finally { // Clean up if (gchandle.IsAllocated) { gchandle.Free(); } if (!pSignerCertContext.Equals(IntPtr.Zero)) { Win32.CertFreeCertificateContext(pSignerCertContext); } if (!pSignerCertInfo.Equals(IntPtr.Zero)) { Marshal.FreeHGlobal(pSignerCertInfo); } if (!hStore.Equals(IntPtr.Zero)) { Win32.CertCloseStore(hStore, Win32.CERT_CLOSE_STORE_FORCE_FLAG); } if (stream != null) { stream.Close(); } if (m_callbackFile != null) { m_callbackFile.Close(); } if (!hMsg.Equals(IntPtr.Zero)) { Win32.CryptMsgClose(hMsg); } } } } } and using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Runtime.InteropServices; using System.Security.Cryptography.X509Certificates; using System.ComponentModel; using System.Security.Cryptography; namespace LargeCMS { class Win32 { #region "CONSTS" public const int X509_ASN_ENCODING = 0x00000001; public const int PKCS_7_ASN_ENCODING = 0x00010000; public const int CMSG_SIGNED = 2; public const int CMSG_DETACHED_FLAG = 0x00000004; public const int AT_KEYEXCHANGE = 1; public const int AT_SIGNATURE = 2; public const String szOID_OIWSEC_sha1 = "1.3.14.3.2.26"; public const int CMSG_CTRL_VERIFY_SIGNATURE = 1; public const int CMSG_CERT_PARAM = 12; public const int CMSG_SIGNER_CERT_INFO_PARAM = 7; public const int CERT_STORE_PROV_MSG = 1; public const int CERT_CLOSE_STORE_FORCE_FLAG = 1; #endregion #region "STRUCTS" 
[StructLayout(LayoutKind.Sequential)] public struct CRYPT_ALGORITHM_IDENTIFIER { public String pszObjId; BLOB Parameters; } [StructLayout(LayoutKind.Sequential)] public struct CERT_ID { public int dwIdChoice; public BLOB IssuerSerialNumberOrKeyIdOrHashId; } [StructLayout(LayoutKind.Sequential)] public struct CMSG_SIGNER_ENCODE_INFO { public int cbSize; public IntPtr pCertInfo; public IntPtr hCryptProvOrhNCryptKey; public int dwKeySpec; public CRYPT_ALGORITHM_IDENTIFIER HashAlgorithm; public IntPtr pvHashAuxInfo; public int cAuthAttr; public IntPtr rgAuthAttr; public int cUnauthAttr; public IntPtr rgUnauthAttr; public CERT_ID SignerId; public CRYPT_ALGORITHM_IDENTIFIER HashEncryptionAlgorithm; public IntPtr pvHashEncryptionAuxInfo; } [StructLayout(LayoutKind.Sequential)] public struct CERT_CONTEXT { public int dwCertEncodingType; public IntPtr pbCertEncoded; public int cbCertEncoded; public IntPtr pCertInfo; public IntPtr hCertStore; } [StructLayout(LayoutKind.Sequential)] public struct BLOB { public int cbData; public IntPtr pbData; } [StructLayout(LayoutKind.Sequential)] public struct CMSG_SIGNED_ENCODE_INFO { public int cbSize; public int cSigners; public IntPtr rgSigners; public int cCertEncoded; public IntPtr rgCertEncoded; public int cCrlEncoded; public IntPtr rgCrlEncoded; public int cAttrCertEncoded; public IntPtr rgAttrCertEncoded; } [StructLayout(LayoutKind.Sequential)] public struct CMSG_STREAM_INFO { public int cbContent; public StreamOutputCallbackDelegate pfnStreamOutput; public IntPtr pvArg; } #endregion #region "DELEGATES" public delegate Boolean StreamOutputCallbackDelegate(IntPtr pvArg, IntPtr pbData, int cbData, Boolean fFinal); #endregion #region "API" [DllImport("advapi32.dll", CharSet = CharSet.Auto, SetLastError = true)] public static extern Boolean CryptAcquireContext( ref IntPtr hProv, String pszContainer, String pszProvider, int dwProvType, int dwFlags ); [DllImport("Crypt32.dll", SetLastError = true)] public static extern IntPtr CryptMsgOpenToEncode( int dwMsgEncodingType, int dwFlags, int dwMsgType, ref CMSG_SIGNED_ENCODE_INFO pvMsgEncodeInfo, String pszInnerContentObjID, ref CMSG_STREAM_INFO pStreamInfo ); [DllImport("Crypt32.dll", SetLastError = true)] public static extern IntPtr CryptMsgOpenToDecode( int dwMsgEncodingType, int dwFlags, int dwMsgType, IntPtr hCryptProv, IntPtr pRecipientInfo, ref CMSG_STREAM_INFO pStreamInfo ); [DllImport("Crypt32.dll", SetLastError = true)] public static extern Boolean CryptMsgClose( IntPtr hCryptMsg ); [DllImport("Crypt32.dll", SetLastError = true)] public static extern Boolean CryptMsgUpdate( IntPtr hCryptMsg, Byte[] pbData, int cbData, Boolean fFinal ); [DllImport("Crypt32.dll", SetLastError = true)] public static extern Boolean CryptMsgUpdate( IntPtr hCryptMsg, IntPtr pbData, int cbData, Boolean fFinal ); [DllImport("Crypt32.dll", SetLastError = true)] public static extern Boolean CryptMsgGetParam( IntPtr hCryptMsg, int dwParamType, int dwIndex, IntPtr pvData, ref int pcbData ); [DllImport("Crypt32.dll", SetLastError = true)] public static extern Boolean CryptMsgControl( IntPtr hCryptMsg, int dwFlags, int dwCtrlType, IntPtr pvCtrlPara ); [DllImport("advapi32.dll", SetLastError = true)] public static extern Boolean CryptReleaseContext( IntPtr hProv, int dwFlags ); [DllImport("Crypt32.dll", SetLastError = true)] public static extern IntPtr CertCreateCertificateContext( int dwCertEncodingType, IntPtr pbCertEncoded, int cbCertEncoded ); [DllImport("Crypt32.dll", SetLastError = true)] public static extern Boolean 
CertFreeCertificateContext( IntPtr pCertContext ); [DllImport("Crypt32.dll", SetLastError = true)] public static extern IntPtr CertOpenStore( int lpszStoreProvider, int dwMsgAndCertEncodingType, IntPtr hCryptProv, int dwFlags, IntPtr pvPara ); [DllImport("Crypt32.dll", SetLastError = true)] public static extern IntPtr CertGetSubjectCertificateFromStore( IntPtr hCertStore, int dwCertEncodingType, IntPtr pCertId ); [DllImport("Crypt32.dll", SetLastError = true)] public static extern IntPtr CertCloseStore( IntPtr hCertStore, int dwFlags ); #endregion } }
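
    A hedged pointer rather than a confirmed fix: -2146893816 is 0x80090008, which is NTE_BAD_ALGID ("Invalid algorithm specified"). Windows Server 2003's default cryptographic providers lack SHA-2 support (it was added later via a hotfix), so a message digested or signed with SHA-256 can decode fine on Windows 7 yet fail there; checking the message's digest algorithm would be the first step.

        // Quick sanity check on the error code from the log:
        Console.WriteLine("0x{0:X8}", unchecked((uint)(-2146893816))); // 0x80090008 == NTE_BAD_ALGID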

  • MySQL server crashes (InnoDB)

    - by martin
    Today we got a DB crash. The DB is InnoDB. At first, in the log:

        120404 10:57:40 InnoDB: ERROR: the age of the last checkpoint is 9433732,
        InnoDB: which exceeds the log group capacity 9433498.
        InnoDB: If you are using big BLOB or TEXT rows, you must set the
        InnoDB: combined size of log files at least 10 times bigger than the
        InnoDB: largest such row.
        120404 10:58:48 InnoDB: ERROR: the age of the last checkpoint is 9825579,
        InnoDB: which exceeds the log group capacity 9433498.
        InnoDB: If you are using big BLOB or TEXT rows, you must set the
        InnoDB: combined size of log files at least 10 times bigger than the
        InnoDB: largest such row.
        120404 10:59:04 InnoDB: ERROR: the age of the last checkpoint is 13992586,
        InnoDB: which exceeds the log group capacity 9433498.
        InnoDB: If you are using big BLOB or TEXT rows, you must set the
        InnoDB: combined size of log files at least 10 times bigger than the
        InnoDB: largest such row.
        120404 10:59:20 InnoDB: ERROR: the age of the last checkpoint is 18059881,
        InnoDB: which exceeds the log group capacity 9433498.
        InnoDB: If you are using big BLOB or TEXT rows, you must set the
        InnoDB: combined size of log files at least 10 times bigger than the
        InnoDB: largest such row.

    After a manual service stop and a normal PC restart:

        120404 11:12:35 InnoDB: Error: page 3473451 log sequence number 105 802365904
        InnoDB: is in the future! Current system log sequence number 105 796344770.
        InnoDB: Your database may be corrupt or you may have copied the InnoDB
        InnoDB: tablespace but not the InnoDB log files. See
        InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
        InnoDB: for more information.
        InnoDB: 1 transaction(s) which must be rolled back or cleaned up
        InnoDB: in total 1 row operations to undo
        InnoDB: Trx id counter is 0 1103869440
        120404 11:12:37 InnoDB: Error: page 0 log sequence number 105 834817616
        InnoDB: is in the future! Current system log sequence number 105 796344770.
        InnoDB: Your database may be corrupt or you may have copied the InnoDB
        InnoDB: tablespace but not the InnoDB log files. See
        InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
        InnoDB: for more information.
        InnoDB: Last MySQL binlog file position 0 3710603, file name .\mysql-bin.000336
        InnoDB: Starting in background the rollback of uncommitted transactions
        120404 11:12:38 InnoDB: Rolling back trx with id 0 1103866646, 1 rows to undo
        120404 11:12:38 InnoDB: Started; log sequence number 105 796344770
        120404 11:12:38 InnoDB: Error: page 2097163 log sequence number 105 803249754
        InnoDB: is in the future! Current system log sequence number 105 796344770.
        InnoDB: Your database may be corrupt or you may have copied the InnoDB
        InnoDB: tablespace but not the InnoDB log files. See
        InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
        InnoDB: for more information.
        InnoDB: Rolling back of trx id 0 1103866646 completed
        120404 11:12:39 InnoDB: Rollback of non-prepared transactions completed
        120404 11:12:39 [Note] Event Scheduler: Loaded 0 events
        120404 11:12:39 [Note] wampmysqld: ready for connections.
        Version: '5.1.53-community'  socket: ''  port: 3306  MySQL Community Server (GPL)
        120404 11:12:40 InnoDB: Error: page 2097162 log sequence number 105 803215859
        InnoDB: is in the future! Current system log sequence number 105 796345097.
        InnoDB: Your database may be corrupt or you may have copied the InnoDB
        InnoDB: tablespace but not the InnoDB log files. See
        InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
        InnoDB: for more information.
        120404 11:12:40 InnoDB: Error: page 2097156 log sequence number 105 803181181
        InnoDB: is in the future! Current system log sequence number 105 796345097.
        InnoDB: Your database may be corrupt or you may have copied the InnoDB
        InnoDB: tablespace but not the InnoDB log files. See
        InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
        InnoDB: for more information.
        120404 11:12:40 InnoDB: Error: page 2097157 log sequence number 105 803193066
        InnoDB: is in the future! Current system log sequence number 105 796345097.
        InnoDB: Your database may be corrupt or you may have copied the InnoDB
        InnoDB: tablespace but not the InnoDB log files. See
        InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
        InnoDB: for more information.

    When I tried to recover the data I get:

        key_buffer_size=16777216
        read_buffer_size=262144
        max_used_connections=0
        max_threads=151
        threads_connected=0
        It is possible that mysqld could use up to
        key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 133725 K
        bytes of memory
        Hope that's ok; if not, decrease some variables in the equation.

        thd: 0x0
        Attempting backtrace. You can use the following information to find out
        where mysqld died. If you see no messages after this, something went
        terribly wrong...
        0000000140262AFC    mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z()
        00000001402AAFA1    mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z()
        00000001402AB33A    mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z()
        0000000140268219    mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z()
        000000014027DB13    mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z()
        00000001402A909F    mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z()
        00000001402A91B6    mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z()
        000000014025B9B0    mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z()
        000000014022F9C6    mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z()
        0000000140219979    mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z()
        000000014009ABCF    mysqld.exe!?ha_initialize_handlerton@@YAHPEAUst_plugin_int@@@Z()
        000000014003308C    mysqld.exe!?plugin_lock_by_name@@YAPEAUst_plugin_int@@PEAVTHD@@PEBUst_mysql_lex_string@@H@Z()
        00000001400375A9    mysqld.exe!?plugin_init@@YAHPEAHPEAPEADH@Z()
        000000014001DACE    mysqld.exe!handle_shutdown()
        000000014001E285    mysqld.exe!?win_main@@YAHHPEAPEAD@Z()
        000000014001E632    mysqld.exe!?mysql_service@@YAHPEAX@Z()
        00000001402EA477    mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z()
        00000001402EA545    mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z()
        000000007712652D    kernel32.dll!BaseThreadInitThunk()
        000000007725C521    ntdll.dll!RtlUserThreadStart()
        The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
        information that should help you find out what is causing the crash.
        120404 14:17:49 [Note] Plugin 'FEDERATED' is disabled.
        120404 14:17:49 [Warning] option 'innodb-force-recovery': signed value 8 adjusted to 6
        InnoDB: The user has set SRV_FORCE_NO_LOG_REDO on
        InnoDB: Skipping log redo
        InnoDB: Error: trying to access page number 4290979199 in space 0,
        InnoDB: space name .\ibdata1,
        InnoDB: which is outside the tablespace bounds.
        InnoDB: Byte offset 0, len 16384, i/o type 10.
        InnoDB: If you get this error at mysqld startup, please check that
        InnoDB: your my.cnf matches the ibdata files that you have in the
        InnoDB: MySQL server.
        120404 14:17:52 InnoDB: Assertion failure in thread 3928 in file .\fil\fil0fil.c lin23
        InnoDB: We intentionally generate a memory trap.
        InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
        InnoDB: If you get repeated assertion failures or crashes, even
        InnoDB: immediately after the mysqld startup, there may be
        InnoDB: corruption in the InnoDB tablespace. Please refer to
        InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
        InnoDB: about forcing recovery.
        120404 14:17:52 - mysqld got exception 0xc0000005 ;
        This could be because you hit a bug. It is also possible that this binary
        or one of the libraries it was linked against is corrupt, improperly built,
        or misconfigured. This error can also be caused by malfunctioning hardware.
        We will try our best to scrape up some info that will hopefully help diagnose
        the problem, but since we have already crashed, something is definitely wrong
        and this may fail.

        key_buffer_size=16777216
        read_buffer_size=262144
        max_used_connections=0
        max_threads=151
        threads_connected=0
        It is possible that mysqld could use up to
        key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 133725 K
        bytes of memory
        Hope that's ok; if not, decrease some variables in the equation.

        thd: 0x0
        Attempting backtrace. You can use the following information to find out
        where mysqld died. If you see no messages after this, something went
        terribly wrong...
        0000000140262AFC    mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z()
        00000001402AAFA1    mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z()
        00000001402AB33A    mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z()
        0000000140268219    mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z()
        000000014027DB13    mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z()
        00000001402A909F    mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z()
        00000001402A91B6    mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z()
        000000014025B9B0    mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z()
        000000014022F9C6    mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z()
        0000000140219979    mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z()
        000000014009ABCF    mysqld.exe!?ha_initialize_handlerton@@YAHPEAUst_plugin_int@@@Z()
        000000014003308C    mysqld.exe!?plugin_lock_by_name@@YAPEAUst_plugin_int@@PEAVTHD@@PEBUst_mysql_lex_string@@H@Z()
        00000001400375A9    mysqld.exe!?plugin_init@@YAHPEAHPEAPEADH@Z()
        000000014001DACE    mysqld.exe!handle_shutdown()
        000000014001E285    mysqld.exe!?win_main@@YAHHPEAPEAD@Z()
        000000014001E632    mysqld.exe!?mysql_service@@YAHPEAX@Z()
        00000001402EA477    mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z()
        00000001402EA545    mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z()
        000000007712652D    kernel32.dll!BaseThreadInitThunk()
        000000007725C521    ntdll.dll!RtlUserThreadStart()
        The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
        information that should help you find out what is causing the crash.

    Any suggestion on how to get the DB working?
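
    A hedged reading of the logs above: the first errors say the combined InnoDB log is too small for the biggest BLOB/TEXT rows, and the "log sequence number is in the future" errors suggest the log files no longer match the tablespace. The usual path is to get the server up with innodb_force_recovery (the lowest value that works), mysqldump everything, then rebuild with bigger logs, e.g.:

        # my.cnf sketch - sizes are illustrative; the combined log size should be
        # at least 10x the largest BLOB/TEXT row, per the error message above.
        [mysqld]
        innodb_log_file_size      = 256M
        innodb_log_files_in_group = 2
        # In MySQL 5.1, remove the old ib_logfile0/ib_logfile1 after a CLEAN
        # shutdown before restarting with the new size.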

    Read the article

  • objective-c uncompress/inflate a NSData object which contains a gzipped string

    - by user141146
    Hi, I'm using objective-c (iPhone) to read a sqlite table that contains blob data. The blob data is actually a gzipped string. The blob data is read into memory as an NSData object. I then use a method (gzipInflate) which I grabbed from here (http://www.cocoadev.com/index.pl?NSDataCategory) -- see method below. However, the gzipInflate method is returning nil. If I step through the gzipInflate method, I can see that the status returned near the bottom of the method is -3, which I assume is some type of error (Z_STREAM_END = 1 and Z_OK = 0, but I don't know precisely what a status of -3 is). Consequently, the function returns nil even though the input NSData is 841 bytes in length. Any thoughts on what I'm doing wrong? Is there something wrong with the method? Something wrong with how I'm using the method? Any thoughts on how to test this? Thanks for any help!

        - (NSData *)gzipInflate
        {
            // NSData *compressed_data = [self.description dataUsingEncoding: NSUTF16StringEncoding];
            NSData *compressed_data = self.description;

            if ([compressed_data length] == 0) return compressed_data;

            unsigned full_length = [compressed_data length];
            unsigned half_length = [compressed_data length] / 2;

            NSMutableData *decompressed = [NSMutableData dataWithLength: full_length + half_length];
            BOOL done = NO;
            int status;

            z_stream strm;
            strm.next_in = (Bytef *)[compressed_data bytes];
            strm.avail_in = [compressed_data length];
            strm.total_out = 0;
            strm.zalloc = Z_NULL;
            strm.zfree = Z_NULL;

            if (inflateInit2(&strm, (15+32)) != Z_OK) return nil;
            while (!done)
            {
                // Make sure we have enough room and reset the lengths.
                if (strm.total_out >= [decompressed length])
                    [decompressed increaseLengthBy: half_length];
                strm.next_out = [decompressed mutableBytes] + strm.total_out;
                strm.avail_out = [decompressed length] - strm.total_out;

                // Inflate another chunk.
                status = inflate (&strm, Z_SYNC_FLUSH);
                NSLog(@"zstreamend = %d", Z_STREAM_END);
                if (status == Z_STREAM_END) done = YES;
                else if (status != Z_OK) break; // status here actually is -3
            }
            if (inflateEnd (&strm) != Z_OK) return nil;

            // Set real length.
            if (done)
            {
                [decompressed setLength: strm.total_out];
                return [NSData dataWithData: decompressed];
            }
            else return nil;
        }
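
    A note on the -3: in zlib, Z_DATA_ERROR is -3, which means inflate() decided the input is not a valid zlib/gzip stream. A plausible culprit, assuming the method lives in a category on NSData as in the linked NSDataCategory page: self is already the data, while self.description builds a hex-dump NSString of it, so the code may be inflating a textual rendering of the blob instead of the blob itself. A minimal sketch of the fix under that assumption:

        // Sketch: operate on the receiver directly; in an NSData category,
        // self *is* the gzipped payload read from the sqlite blob column.
        NSData *compressed_data = self;
        if ([compressed_data length] == 0) return compressed_data;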

    Read the article

  • retreving long text (CLOB) using CFQuery

    - by CFUser
    I am using CFQuery to retrieve a CLOB field from an Oracle DB. If the CLOB field contains less than roughly 8000 characters, I can see the retrieved value (the output); if the value is longer than 8000 characters, the value is not retrieved at all. In <cfdump> I can see the column come back as an 'empty string', even though the value exists in the Oracle DB. I am using the Oracle driver, and in the CF Admin console I have enabled 'Enable long text retrieval (CLOB)' and 'Enable binary large object retrieval (BLOB)', and set 'Long Text Buffer (chr)' and 'Blob Buffer (bytes)' to 6400000. Any suggestions for retrieving the full text?
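
    If the admin-console buffers are already maxed out, one workaround -- sketched here with invented table/column names, and relying on the fact that DBMS_LOB.SUBSTR returns at most 4000 characters per call in a SQL context -- is to pull the CLOB out of Oracle in fixed-size chunks and reassemble it in CFML:

        <!--- Hypothetical chunked read; clob_col, my_table and id are placeholders --->
        <cfquery name="q" datasource="myDsn">
            SELECT DBMS_LOB.SUBSTR(clob_col, 4000, 1)    AS part1,
                   DBMS_LOB.SUBSTR(clob_col, 4000, 4001) AS part2,
                   DBMS_LOB.SUBSTR(clob_col, 4000, 8001) AS part3
            FROM   my_table
            WHERE  id = <cfqueryparam value="42" cfsqltype="cf_sql_integer">
        </cfquery>
        <cfset fullText = q.part1 & q.part2 & q.part3>

    A loop driven by DBMS_LOB.GETLENGTH would generalize this beyond three chunks.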

    Read the article

  • updateBinaryStream on ResultSet with FileStream

    - by Lieven Cardoen
    If I try to update a FILESTREAM column I get the following error:

        com.microsoft.sqlserver.jdbc.SQLServerException: The result set is not updatable.

    Code:

        System.out.print("Now, let's update the filestream data.");
        FileInputStream iStream = new FileInputStream("c:\\testFile.mp3");
        rs.updateBinaryStream(2, iStream, -1);
        rs.updateRow();
        iStream.close();

    Why is this? Table in SQL Server 2008:

        CREATE TABLE [BinaryAssets].[BinaryAssetFiles](
            [BinaryAssetFileId] [int] IDENTITY(1,1) NOT NULL,
            [FileStreamId] [uniqueidentifier] ROWGUIDCOL NOT NULL,
            [Blob] [varbinary](max) FILESTREAM NULL,
            [HashCode] [nvarchar](100) NOT NULL,
            [Size] [int] NOT NULL,
            [BinaryAssetExtensionId] [int] NOT NULL,

    Query used in Java:

        String strCmd = "select BinaryAssetFileId, Blob from BinaryAssets.BinaryAssetFiles where BinaryAssetFileId = 1";
        stmt = con.createStatement();
        rs = stmt.executeQuery(strCmd);
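
    One likely explanation, offered as a sketch rather than a confirmed diagnosis: con.createStatement() with no arguments yields a ResultSet with CONCUR_READ_ONLY concurrency, so updateBinaryStream()/updateRow() are rejected before the FILESTREAM column even comes into play. You could request an updatable result set via con.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_UPDATABLE) -- driver support for that combination varies -- or sidestep it with an explicit UPDATE:

        // Sketch reusing the con from the question; the mp3 path is the same
        // hypothetical test file. Note setBinaryStream wants a real length, not -1.
        File f = new File("c:\\testFile.mp3");
        FileInputStream in = new FileInputStream(f);
        PreparedStatement ps = con.prepareStatement(
                "UPDATE BinaryAssets.BinaryAssetFiles SET Blob = ? WHERE BinaryAssetFileId = ?");
        ps.setBinaryStream(1, in, (int) f.length());
        ps.setInt(2, 1);
        ps.executeUpdate();
        ps.close();
        in.close();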

    Read the article

  • Using Active Objects and BLOBs

    - by Andrew L.
    I am in a group of people who are creating a Defect Tracking program as a project. We have been using Active Objects and have run into some issues. Currently the maximum file size for the blob is approximately 2 MB, but we want to be able to increase it up to 2 GB. We have been looking at many sites and have not been able to find out how to increase the size. We are currently storing the blob as an array of bytes. Our current error says 'Packet for query is too large'. We don't know how to set the variable, and we don't know how to set it using AO. We are programming this in Java, too. We are wondering if anyone has a solution to this problem. Thanks for the help.
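
    'Packet for query is too large' is MySQL's wording for a statement that exceeds max_allowed_packet, so -- assuming the backing database is MySQL -- the knob lives on the server rather than in Active Objects. Two caveats: the hard ceiling for max_allowed_packet is 1 GB, so 2 GB blobs cannot travel in a single INSERT no matter what, and the column itself must be a LONGBLOB (a plain BLOB tops out at 64 KB). A sketch of raising the limit:

        -- at runtime (requires SUPER; applies to new connections):
        SET GLOBAL max_allowed_packet = 1073741824;  -- 1 GB, the maximum

        # or permanently, in my.cnf / my.ini:
        [mysqld]
        max_allowed_packet = 1G

    For files anywhere near 2 GB, streaming to the filesystem and storing only a path in the table is usually the saner design.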

    Read the article

  • Help understanding some OpenGL stuff

    - by shinjuo
    I am working with some code to create a triangle that moves with arrow keys. I want to create a second object that moves independently. This is where I am having trouble: I have created the second actor, but cannot get it to move. There is too much code to post it all, so I will just post a little and see if anyone can help at all.

    ogl_test.cpp:

        #include "platform.h"
        #include "srt/scheduler.h"
        #include "model.h"
        #include "controller.h"
        #include "model_module.h"
        #include "graphics_module.h"

        class blob : public actor {
        public:
            blob(float x, float y) : actor(math::vector2f(x, y)) { }

            void render() {
                transform();
                glBegin(GL_TRIANGLES);
                glVertex3f(0.25f, 0.0f, -5.0f);
                glVertex3f(-.5f, 0.25f, -5.0f);
                glVertex3f(-.5f, -0.25f, -5.0f);
                glEnd();
                end_transform();
            }

            void update(controller& c, float dt) {
                if (c.left_key) { rho += pi / 9.0f * dt; c.left_key = false; }
                if (c.right_key) { rho -= pi / 9.0f * dt; c.right_key = false; }
                if (c.up_key) { v += .1f * dt; c.up_key = false; }
                if (c.down_key) {
                    v -= .1f * dt;
                    if (v < 0.0) { v = 0.0; }
                    c.down_key = false;
                }
                actor::update(c, dt);
            }
        };

        class enemyOne : public actor {
        public:
            enemyOne(float x, float y) : actor(math::vector2f(x, y)) { }

            void render() {
                transform();
                glBegin(GL_TRIANGLES);
                glVertex3f(0.25f, 0.0f, -5.0f);
                glVertex3f(-.5f, 0.25f, -5.0f);
                glVertex3f(-.5f, -0.25f, -5.0f);
                glEnd();
                end_transform();
            }

            void update(controller& c, float dt) {
                // identical key handling to blob
                if (c.left_key) { rho += pi / 9.0f * dt; c.left_key = false; }
                if (c.right_key) { rho -= pi / 9.0f * dt; c.right_key = false; }
                if (c.up_key) { v += .1f * dt; c.up_key = false; }
                if (c.down_key) {
                    v -= .1f * dt;
                    if (v < 0.0) { v = 0.0; }
                    c.down_key = false;
                }
                actor::update(c, dt);
            }
        };

        int APIENTRY WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, char* lpCmdLine, int nCmdShow) {
            model m;
            controller control(m);
            srt::scheduler scheduler(33);
            srt::frame* model_frame = new srt::frame(scheduler.timer(), 0, 1, 2);
            srt::frame* render_frame = new srt::frame(scheduler.timer(), 1, 1, 2);
            model_frame->add(new model_module(m, control));
            render_frame->add(new graphics_module(m));
            scheduler.add(model_frame);
            scheduler.add(render_frame);

            blob* prime = new blob(0.0f, 0.0f);
            m.add(prime);
            m.set_prime(prime);

            enemyOne* primeTwo = new enemyOne(2.0f, 0.0f);
            m.add(primeTwo);
            m.set_prime(primeTwo);

            scheduler.start();
            control.start();
            return 0;
        }

    model.h:

        #include <vector>
        #include "vec.h"

        const double pi = 3.14159265358979323;

        class controller;

        using math::vector2f;

        class actor {
        public:
            vector2f P;
            float theta;
            float v;
            float rho;

            actor(const vector2f& init_location) : P(init_location), rho(0.0), v(0.0), theta(0.0) { }

            virtual void render() = 0;

            virtual void update(controller&, float dt) {
                float v1 = v;
                float theta1 = theta + rho * dt;
                vector2f P1 = P + v1 * vector2f(cos(theta1), sin(theta1));
                if (P1.x < -4.5f || P1.x > 4.5f) { P1.x = -P1.x; }
                if (P1.y < -4.5f || P1.y > 4.5f) { P1.y = -P1.y; }
                v = v1;
                theta = theta1;
                P = P1;
            }

        protected:
            void transform() {
                glPushMatrix();
                glTranslatef(P.x, P.y, 0.0f);
                glRotatef(theta * 180.0f / pi, 0.0f, 0.0f, 1.0f); // Rotate about the z-axis
            }

            void end_transform() {
                glPopMatrix();
            }
        };

        class model {
        private:
            typedef std::vector<actor*> actor_vector;
            actor_vector actors;

        public:
            actor* _prime;

            model() { }

            void add(actor* a) { actors.push_back(a); }

            void set_prime(actor* a) { _prime = a; }

            void update(controller& control, float dt) {
                for (actor_vector::iterator i = actors.begin(); i != actors.end(); ++i) {
                    (*i)->update(control, dt);
                }
            }

            void render() {
                for (actor_vector::iterator i = actors.begin(); i != actors.end(); ++i) {
                    (*i)->render();
                }
            }
        };
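
    A guess at why the second triangle misbehaves, offered as a sketch since model_module and controller aren't shown: set_prime() is called twice, so _prime ends up pointing at primeTwo and any special treatment the model gives its prime actor no longer applies to the original blob; also, blob::update and enemyOne::update consume the very same controller flags, so whichever actor updates first clears the keys and the two can never move independently. One way to decouple them under those assumptions:

        // In WinMain: keep the keyboard-driven blob as the only prime actor.
        //   m.add(primeTwo);
        //   // m.set_prime(primeTwo);   // removed: prime stays the player

        // In enemyOne: ignore the controller and steer autonomously.
        void update(controller& c, float dt) {
            v = 0.05f;              // constant forward speed (illustrative value)
            rho = pi / 18.0f;       // constant turn rate, so it circles on its own
            actor::update(c, dt);   // same integration as the player
        }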

    Read the article

  • What to pass to UserType, BlobType.setPreparedStatement session parameter

    - by dlots
    http://blog.xebia.com/2009/11/09/understanding-and-writing-hibernate-user-types/

    I am attempting to define a custom serialization UserType that mimics the XStreamUserType referenced and provided here: http://code.google.com/p/aphillips/source/browse/commons-hibernate-usertype/trunk/src/main/java/com/qrmedia/commons/persistence/hibernate/usertype/XStreamableUserType.java

    My serializer outputs a byte array that should presumably be written to a Blob. I was going to do:

        public class CustomSerUserType extends DirtyCheckableUserType {
            protected SerA ser = F.g(SerA.class);

            public Class<Object> returnedClass() {
                return Object.class;
            }

            public int[] sqlTypes() {
                return new int[] { Types.BLOB };
            }

            public Object nullSafeGet(ResultSet resultSet, String[] names, Object owner)
                    throws HibernateException, SQLException {
                if()
            }

            public void nullSafeSet(PreparedStatement preparedStatement, Object value, int index)
                    throws HibernateException, SQLException {
                BlobType.nullSafeSet(preparedStatement, ser.ser(value), index);
            }
        }

    Unfortunately, the BlobType.nullSafeSet method requires the session. So how does one define a UserType that gets access to a servlet request's session? EDIT: There is a discussion of the issue here, and it doesn't appear there is a solution: Best way to implement a Hibernate UserType after deprecations?
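
    Since the serializer already yields a byte[], the Blob machinery -- and therefore the session -- can be skipped entirely by binding through plain JDBC. A sketch, assuming ser.ser() returns byte[] and SerA exposes a matching deser(byte[]):

        public void nullSafeSet(PreparedStatement st, Object value, int index)
                throws HibernateException, SQLException {
            if (value == null) {
                st.setNull(index, Types.BLOB);
            } else {
                st.setBytes(index, ser.ser(value)); // or setBinaryStream for very large payloads
            }
        }

        public Object nullSafeGet(ResultSet rs, String[] names, Object owner)
                throws HibernateException, SQLException {
            byte[] bytes = rs.getBytes(names[0]);
            return bytes == null ? null : ser.deser(bytes); // deser() is an assumed counterpart
        }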

    Read the article

  • How to upload images to appengine from gwt

    - by user356083
    Related question: I am having similar problems to what that poster had. My upload server returns a redirect. Specifically, I am not sure what FormPanel.SubmitCompleteEvent.getResults() returns. Sometimes I get the HTML of an img:

        <img style="cursor: -moz-zoom-in;" alt="http://<myapp>.appspot.com/servePic?blob-key=abcdef" src="http://<myapp>.appspot.com/servePic?blob-abcdef" height="1" width="1">

    Sometimes I get the image data in bytes. The behavior varies, and I don't know on what: I get the first in development and the second in production. Does anyone know anything about this?
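
    One possibility, offered purely as a guess: getResults() hands back whatever the upload servlet wrote to the response, and some browsers wrap a text/plain reply in a <pre> element while others deliver it raw, which could account for a development/production difference. Normalizing the reply before use is cheap:

        // Hedged sketch: strip a possible <pre> wrapper from the servlet reply.
        // 'form' is the upload FormPanel; 'image' is an assumed Image widget.
        form.addSubmitCompleteHandler(new FormPanel.SubmitCompleteHandler() {
            public void onSubmitComplete(FormPanel.SubmitCompleteEvent event) {
                String results = event.getResults();
                String payload = results == null
                        ? ""
                        : results.replaceAll("(?i)</?pre[^>]*>", "").trim();
                image.setUrl(payload);  // assumes the servlet writes back the serving URL
            }
        });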

    Read the article

  • How to save bytes to an image and access it from Bottle

    - by Graham Smith
    I'm working on an API wrapper for Snapchat using Python and Bottle, but in order to return the file (retrieved by the Python script) I have to save the bytes (returned by Snapchat) to a .jpg file. I'm not quite sure how I will do this and still be able to access the file so that it can be returned. Here's what I have so far, but it returns a 404.

        @route('/image')
        def image():
            username = request.query.username
            token = request.query.auth_token
            img_id = request.query.id
            return get_blob(username, token, img_id)

        def get_blob(usr, token, img_id):
            # Form URL and download encrypted "blob"
            blob_url = "https://feelinsonice.appspot.com/ph/blob?id={}".format(img_id)
            blob_url += "&username=" + usr + "&timestamp=" + str(timestamp()) + "&req_token=" + req_token(token)
            enc_blob = requests.get(blob_url).content
            # Save decrypted image
            FileUpload.save('/images/' + img_id + '.jpg')
            img = open('images/' + img_id + '.jpg', 'wb')
            img.write(decrypt(enc_blob))
            img.close()
            return static_file(img_id + '.jpg', root='/images/')
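
    A guess at the 404, sketched with the question's own helpers (timestamp, req_token, decrypt) and an assumed directory layout: the FileUpload.save() call doesn't belong here (bottle.FileUpload wraps incoming uploads), and the paths are inconsistent -- the file is written to the relative 'images/' while static_file() is told to serve from the absolute '/images/'. Writing and serving from one absolute root addresses both:

        import os
        import requests
        from bottle import static_file

        # Assumption: keep images next to this script, under ./images
        IMG_ROOT = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'images')

        def get_blob(usr, token, img_id):
            blob_url = ("https://feelinsonice.appspot.com/ph/blob?id={}".format(img_id)
                        + "&username=" + usr
                        + "&timestamp=" + str(timestamp())
                        + "&req_token=" + req_token(token))
            enc_blob = requests.get(blob_url).content
            if not os.path.isdir(IMG_ROOT):
                os.makedirs(IMG_ROOT)
            with open(os.path.join(IMG_ROOT, img_id + '.jpg'), 'wb') as img:
                img.write(decrypt(enc_blob))  # write the decrypted bytes first
            return static_file(img_id + '.jpg', root=IMG_ROOT)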

    Read the article

  • Determine calling executable in Python

    - by Brian Rosner
    I am trying to find the best way of re-invoking a Python script within itself. Currently it is working like http://github.com/benoitc/gunicorn/blob/master/gunicorn/arbiter.py#L285. The START_CTX is created at http://github.com/benoitc/gunicorn/blob/master/gunicorn/arbiter.py#L82-86. The code is relying on sys.argv[0] as the "caller". However, this fails in cases where it is invoked with:

        python script.py ...

    This case does work:

        python ./script.py ...

    because the code uses os.chdir before running os.execlp. I did notice os.environ["_"], but I am not sure how reliable that would be. Another option is to check whether sys.argv[0] is not on the PATH and not executable, and to use sys.executable when calling os.execlp. Any thoughts on a better approach to solving this issue?
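
    sys.executable sidesteps the sys.argv[0] ambiguity: re-invoke the interpreter explicitly instead of the script. A sketch (not the gunicorn authors' solution) that captures the absolute script path at import time, before any os.chdir can invalidate it:

        import os
        import sys

        # Resolve once, at startup, while the cwd still matches the invocation.
        SCRIPT = os.path.abspath(sys.argv[0])

        def reexec():
            # Works for both `python script.py` and `./script.py`.
            os.execlp(sys.executable, sys.executable, SCRIPT, *sys.argv[1:])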

    Read the article

  • Custom CallOut not displayed correctly in ios6?

    - by balu
    To implement a custom callout in an MKMapView, I am using the classes CalloutMapAnnotationView.h and CalloutMapAnnotationView.m, which I extracted from the following links:

    https://github.com/asalom/Custom-Map-Annotation-Callouts/blob/master/Classes/CalloutMapAnnotationView.h
    https://github.com/asalom/Custom-Map-Annotation-Callouts/blob/master/Classes/CalloutMapAnnotationView.m

    These classes work fine in iOS 5, but in iOS 6, when I tap the callout, the map view moves and the callout is not displayed correctly; it also renders incorrectly while zooming. I have tried several ways to get rid of this problem, such as checking the OS version and changing some of the methods in the classes, but to no avail. After implementing these classes in iOS 5 the map view comes up correctly, but in iOS 6, for example, the callout does not come up properly the way it does in iOS 5.

    Read the article

< Previous Page | 6 7 8 9 10 11 12 13 14 15 16 17  | Next Page >