Search Results

Search found 37989 results on 1520 pages for 'software as a service'.


  • Removing a device in "removed" state from Linux software RAID array

    - by Sahasranaman MS
    My workstation has two disks (/dev/sd[ab]), both with similar partitioning. /dev/sdb failed, and cat /proc/mdstat stopped showing the second sdb partition. I ran mdadm --fail and mdadm --remove for all partitions from the failed disk on the arrays that use them, although all such commands failed with:

        mdadm: set device faulty failed for /dev/sdb2: No such device
        mdadm: hot remove failed for /dev/sdb2: No such device or address

    Then I hot swapped the failed disk, partitioned the new disk and added the partitions to the respective arrays. All arrays were rebuilt properly except one, because in /dev/md2 the failed disk doesn't seem to have been removed from the array properly. Because of this, the new partition keeps getting added as a spare to the array, and its status remains degraded. Here's what mdadm --detail /dev/md2 shows:

        [root@ldmohanr ~]# mdadm --detail /dev/md2
        /dev/md2:
                Version : 1.1
          Creation Time : Tue Dec 27 22:55:14 2011
             Raid Level : raid1
             Array Size : 52427708 (50.00 GiB 53.69 GB)
          Used Dev Size : 52427708 (50.00 GiB 53.69 GB)
           Raid Devices : 2
          Total Devices : 2
            Persistence : Superblock is persistent
          Intent Bitmap : Internal
            Update Time : Fri Nov 23 14:59:56 2012
                  State : active, degraded
         Active Devices : 1
        Working Devices : 2
         Failed Devices : 0
          Spare Devices : 1
                   Name : ldmohanr.net:2 (local to host ldmohanr.net)
                   UUID : 4483f95d:e485207a:b43c9af2:c37c6df1
                 Events : 5912611

            Number   Major   Minor   RaidDevice State
               0       8        2        0      active sync   /dev/sda2
               1       0        0        1      removed
               2       8       18        -      spare         /dev/sdb2

    To remove a disk, mdadm needs a device filename, which was /dev/sdb2 originally, but that no longer refers to device number 1. I need help with removing device number 1 with 'removed' status and making /dev/sdb2 active.
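    One avenue worth trying, sketched below: newer versions of mdadm accept the keywords failed and detached in place of a device name, precisely for the case where the device node is gone. Once the stale slot is dropped, the spare should be promoted automatically. Treat this as a starting point and check the mdadm man page for your version first.

        # drop members whose device node no longer exists (open() returns ENXIO)
        mdadm /dev/md2 --remove detached
        # or drop everything marked faulty
        mdadm /dev/md2 --remove failed
        # confirm the spare was promoted and the rebuild started
        mdadm --detail /dev/md2
        cat /proc/mdstat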

    Read the article

  • Software RAID 0 with six disks performance

    - by user134880
    I have some problems with disk performance. I have 6 x WD 500GB RE4 disks. Each disk gives 135MB/s throughput. All measurements are made with hdparm with the options "-tT" (I know that it is just a synthetic test, but I need some starting point for measurements). I have a controller with a Sil3124 chip, 4 ports, on PCI Express 1x. So...

        RAID0 on controller with 2 disks gives 200MB/s - ok, PCIe limit.
        RAID0 on motherboard with 2 disks gives 270MB/s - nice :)
        RAID0 on controller with 4 disks gives 200MB/s - ok, PCIe limit.
        RAID0 on controller with 4 disks + 1 disk on motherboard = 340MB/s ... :(
        RAID0 on controller with 4 disks + 2 disks on motherboard = 300MB/s ... why?

    Any ideas? Maybe I need more CPU power? Currently it has a Pentium D dual core 2.8GHz and 4GB RAM. It is a dedicated box for storage, no other activity.
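    For context on the repeated 200MB/s ceiling (assuming the Sil3124 card really sits on a PCIe 1.x x1 link, as described), the usable bandwidth of that link works out to almost exactly the observed figure:

        PCIe 1.x x1 link rate:           2.5 Gbit/s
        after 8b/10b line encoding:      2.5 x 8/10 = 2.0 Gbit/s = 250 MB/s theoretical
        minus packet/protocol overhead:  roughly 200-220 MB/s usable

    So every array hanging off the controller is link-limited regardless of disk count, and any extra throughput can only come from the motherboard ports.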

    Read the article

  • Software to hold battery at 50% charge level

    - by Mehper C. Palavuzlar
    My Sony Vaio laptop has a built-in program called "Sony Battery Care" which provides functionality to hold the battery at a certain charge level while operating. For example, I've set it to 50% and the battery is always kept at that level while on AC power. This mode prevents battery degradation and is effective because I usually use my laptop on AC power. I'm looking for a similar program to use with other laptops running Windows, preferably a free one.

    Read the article

  • Assembling Software RAID in Live CD for data recovery

    - by Maletor
    I need help recovering some data that's on my RAID, which is on LVM, on my server running Ubuntu. What happened is that I deleted the logical volume that controlled my swap space, which was on a partition on drives sda2, sdb2, sdc2 and sdd2 in RAID1. This foobared my whole system for one reason or another. Booting leaves me at grub rescue with an error saying that it is an unknown filesystem. When I boot to a live CD I can see my RAID arrays and I can even start them up. However, nothing appears to mount them anywhere, so I can't see the data. I am in the live CD now and I have done sudo apt-get install mdadm lvm2, so it should be able to mount them correctly; I just can't see why it doesn't. Any help is appreciated here. Here is some output. By the way, there are 3 RAIDs: 1) /boot, 100MB, RAID1; 2) swap, 10GB, RAID1; 3) root, 990GB, RAID5.

        ubuntu@ubuntu:~$ df -h
        Filesystem            Size  Used Avail Use% Mounted on
        aufs                  124M  101M   18M  86% /
        none                  2.0G  324K  2.0G   1% /dev
        /dev/sde1             2.0G  826M  1.2G  42% /cdrom
        /dev/loop0            667M  667M     0 100% /rofs
        none                  2.0G  164K  2.0G   1% /dev/shm
        tmpfs                 2.0G   28K  2.0G   1% /tmp
        none                  2.0G   92K  2.0G   1% /var/run
        none                  2.0G     0  2.0G   0% /var/lock
        none                  2.0G     0  2.0G   0% /lib/init/rw
        /dev/md1               91M   73M   15M  84% /media/5ac3dbf1-a6c5-409c-96ae-edc6e27992c7

        ubuntu@ubuntu:~$ cat /etc/fstab
        aufs / aufs rw 0 0
        tmpfs /tmp tmpfs nosuid,nodev 0 0
        /dev/sda2 swap swap defaults 0 0
        /dev/sdb2 swap swap defaults 0 0
        /dev/sdc2 swap swap defaults 0 0
        /dev/sdd2 swap swap defaults 0 0
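    A minimal recovery sketch, assuming the arrays assemble cleanly and the remaining LVM metadata is intact; the volume group and logical volume names below are placeholders to be read off lvdisplay on the actual system:

        # assemble all arrays described in the on-disk superblocks
        sudo mdadm --assemble --scan
        # scan for LVM volume groups on top of the arrays and activate them
        sudo vgscan
        sudo vgchange -ay
        # list the logical volumes to find the root LV's device path
        sudo lvdisplay
        # mount the root logical volume read-only and copy the data off
        sudo mkdir -p /mnt/recovery
        sudo mount -o ro /dev/<vgname>/<rootlv> /mnt/recovery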

    Read the article

  • Virtualization software for windows 7

    - by user23950
    I've been using Shadow User and Deep Freeze on Windows XP, but I don't think they're compatible with Windows 7. Do you know of any application that can achieve the same thing? Please don't tell me to use Returnil Virtual System, because I had a really bad experience with it on XP: the system just restarted over and over, and I had just reformatted at the time.

    Read the article

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
    When I described how to host a Node.js application on Windows Azure, one question that might be raised is how to consume the various Windows Azure services, such as storage, service bus, access control, etc. Interacting with Windows Azure services from Node.js is possible through the Windows Azure Node.js SDK, which is a module available in NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.

    Consume Windows Azure Storage

    Let's first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as on Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles with some more features, hence in this post I will only demonstrate hosting in a WACS worker role. The Node.js code for consuming WAS can also be used when hosted on WAWS, but since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used for a WAWS Node.js application. We can use the solution that I created in my last post. Alternatively we can create a new Windows Azure project in Visual Studio with a worker role, add "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and mark all files as "Copy always". In order to use Windows Azure services we need the Windows Azure Node.js SDK, also known as the "azure" module, which can be installed through NPM. Once we have downloaded and installed it, we need to include it in our worker role project and mark it as "Copy always". You can use my "Copy all always" tool mentioned in my last post to update the current worker role project file. You can also find the source code of this tool here. The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool which provides a cross-platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming various Windows Azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them but will only demonstrate how to use tables and service runtime information in this post. You can find the full documentation of this SDK here. Back in Visual Studio, open "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this.
        var express = require("express");
        var sql = require("node-sqlserver");

        var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;";
        var port = 80;

        var app = express();

        app.configure(function () {
            app.use(express.bodyParser());
        });

        app.get("/", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        app.get("/text/:key/:culture", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    var key = req.params.key;
                    var culture = req.params.culture;
                    var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'";
                    conn.queryRaw(command, function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        app.get("/sproc/:key/:culture", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    var key = req.params.key;
                    var culture = req.params.culture;
                    var command = "EXEC GetItem '" + key + "', '" + culture + "'";
                    conn.queryRaw(command, function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        app.post("/new", function (req, res) {
            var key = req.body.key;
            var culture = req.body.culture;
            var val = req.body.val;

            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')";
                    conn.queryRaw(command, function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.send(200, "Inserted Successful");
                        }
                    });
                }
            });
        });

        app.listen(port);

    Now let's create a new function to copy the records from WASD to the table service:

    1. Delete the table named "resource".
    2. Create a new table named "resource". These two steps ensure that we have an empty table.
    3. Load all records from the "resource" table in WASD.
    4. For each record loaded from WASD, insert it into the table one by one.
    5. Prompt the user when finished.

    In order to use the table service we need the storage account and key, which can be found in the developer portal. Just select the storage account and click the Manage Keys button. Then create two local variables in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module. I also created another variable to store the table name. In order to work with the table service I need to create a storage client for the table service.
    This is very similar to the Windows Azure SDK for .NET. As in the code below, I created a new variable named "client" and used "createTableService", specifying my storage account name and key.

        var azure = require("azure");
        var storageAccountName = "synctile";
        var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw==";
        var tableName = "resource";
        var client = azure.createTableService(storageAccountName, storageAccountKey);

    Now create a new function for the URL "/was/init" so that we can trigger it through the browser. In this function we will first load all records from WASD.

        app.get("/was/init", function (req, res) {
            // load all records from windows azure sql database
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            if (results.rows.length > 0) {
                                // begin to transform the records into table service
                            }
                        }
                    });
                }
            });
        });

    When we have successfully loaded all records we can start to transform them into the table service. First I need to recreate the table in the table service. This can be done by deleting and creating the table through the table client I had just created previously.

        app.get("/was/init", function (req, res) {
            // load all records from windows azure sql database
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            if (results.rows.length > 0) {
                                // begin to transform the records into table service
                                // recreate the table named 'resource'
                                client.deleteTable(tableName, function (error) {
                                    client.createTableIfNotExists(tableName, function (error) {
                                        if (error) {
                                            error["target"] = "createTableIfNotExists";
                                            res.send(500, error);
                                        }
                                        else {
                                            // transform the records
                                        }
                                    });
                                });
                            }
                        }
                    });
                }
            });
        });

    As you can see, the azure SDK provides its methods in the callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked the "deleteTable" method, providing the name of the table and a callback function which will be performed when the table has been deleted or the operation has failed. Underneath, the azure module performs the table deletion in a POSIX async thread pool asynchronously, and once it's done the callback function is invoked. This is the reason we need to nest the table creation code inside the deletion callback: if we placed the table creation code after the deletion code, the two would be invoked in parallel. Next, for each record in WASD I created an entity and inserted it into the table service. Finally I send the response to the browser. Can you find a bug in the code below? I will describe it later in this post.
        app.get("/was/init", function (req, res) {
            // load all records from windows azure sql database
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            if (results.rows.length > 0) {
                                // begin to transform the records into table service
                                // recreate the table named 'resource'
                                client.deleteTable(tableName, function (error) {
                                    client.createTableIfNotExists(tableName, function (error) {
                                        if (error) {
                                            error["target"] = "createTableIfNotExists";
                                            res.send(500, error);
                                        }
                                        else {
                                            // transform the records
                                            for (var i = 0; i < results.rows.length; i++) {
                                                var entity = {
                                                    "PartitionKey": results.rows[i][1],
                                                    "RowKey": results.rows[i][0],
                                                    "Value": results.rows[i][2]
                                                };
                                                client.insertEntity(tableName, entity, function (error) {
                                                    if (error) {
                                                        error["target"] = "insertEntity";
                                                        res.send(500, error);
                                                    }
                                                    else {
                                                        console.log("entity inserted");
                                                    }
                                                });
                                            }
                                            // send the response
                                            console.log("all done");
                                            res.send(200, "All done!");
                                        }
                                    });
                                });
                            }
                        }
                    });
                }
            });
        });

    Now we can publish it to the cloud and have a try, but normally we'd better test it in the local emulator first. In the Node.js SDK there are three built-in properties which provide the account name, key and host address for the local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to let it use my local database. The code will be changed as below.

        // windows azure sql database
        //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;";
        // sql server
        var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};";

        var azure = require("azure");
        var storageAccountName = "synctile";
        var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw==";
        var tableName = "resource";
        // windows azure storage
        //var client = azure.createTableService(storageAccountName, storageAccountKey);
        // local storage emulator
        var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST);

    Now let's run the application and navigate to "localhost:12345/was/init", as I hosted it on port 12345. We can see that it transformed the data from my local database to the local table service. Everything looks fine. But there is a bug in my code. If we have a look at the Node.js command window we will find that it sent the response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js performs all IO operations in a non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending the response was also executed in parallel, even though I wrote it at the end of my logic.
    The correct logic should be: when all entities have been copied to the table service with no error, then I will send the response to the browser; otherwise I should send an error message to the browser. To do so I need to import another module named "async", which helps us coordinate our asynchronous code. Install the module and import it at the beginning of the code. Then we can use its "forEach" method for the asynchronous code of inserting table entities. The first argument of "forEach" is the array to be iterated. The second argument is the operation to perform for each item in the array. And the third argument will be invoked when all items have been processed or any error has occurred; here we can send our response to the browser.

        app.get("/was/init", function (req, res) {
            // load all records from windows azure sql database
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            if (results.rows.length > 0) {
                                // begin to transform the records into table service
                                // recreate the table named 'resource'
                                client.deleteTable(tableName, function (error) {
                                    client.createTableIfNotExists(tableName, function (error) {
                                        if (error) {
                                            error["target"] = "createTableIfNotExists";
                                            res.send(500, error);
                                        }
                                        else {
                                            async.forEach(results.rows,
                                                // transform the records
                                                function (row, callback) {
                                                    var entity = {
                                                        "PartitionKey": row[1],
                                                        "RowKey": row[0],
                                                        "Value": row[2]
                                                    };
                                                    client.insertEntity(tableName, entity, function (error) {
                                                        if (error) {
                                                            callback(error);
                                                        }
                                                        else {
                                                            console.log("entity inserted.");
                                                            callback(null);
                                                        }
                                                    });
                                                },
                                                // send response
                                                function (error) {
                                                    if (error) {
                                                        error["target"] = "insertEntity";
                                                        res.send(500, error);
                                                    }
                                                    else {
                                                        console.log("all done");
                                                        res.send(200, "All done!");
                                                    }
                                                }
                                            );
                                        }
                                    });
                                });
                            }
                        }
                    });
                }
            });
        });

    Run it locally and now we can see the response is sent after all entities have been inserted. Querying entities against the table service is simple as well. Just use the "queryEntity" method on the table service client, providing the partition key and row key. We can also provide more complex query criteria, as in the code here. In the code below I queried an entity by the partition key and row key, and returned the proper localization value in the response.

        app.get("/was/:key/:culture", function (req, res) {
            var key = req.params.key;
            var culture = req.params.culture;
            client.queryEntity(tableName, culture, key, function (error, entity) {
                if (error) {
                    res.send(500, error);
                }
                else {
                    res.json(entity);
                }
            });
        });

    And then I tested it on the local emulator. Finally, if we want to publish this application to the cloud we should change the database connection string and storage account. For more information about how to consume the blob and queue services, as well as the service bus, please refer to the MSDN page.

    Consume Service Runtime

    As I mentioned above, before we publish our application to the cloud we need to change the connection string and account information in our code.
    But if you have played with WACS you should know that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. This means we can have these values defined in the CSCFG and CSDEF files and the runtime will be able to retrieve the proper values. For example, we can add some role settings through the property window of the role, specifying the connection string and storage account for cloud and local. And we can also use the endpoint defined in the role environment in our Node.js application. In the Node.js SDK we can get an object from "azure.RoleEnvironment", which provides the functionality to retrieve the configuration settings and endpoints, etc. In the code below I defined the connection string variables and then used the SDK to retrieve them and initialize the table client.

        var connectionString = "";
        var storageAccountName = "";
        var storageAccountKey = "";
        var tableName = "";
        var client;

        azure.RoleEnvironment.getConfigurationSettings(function (error, settings) {
            if (error) {
                console.log("ERROR: getConfigurationSettings");
                console.log(JSON.stringify(error));
            }
            else {
                console.log(JSON.stringify(settings));
                connectionString = settings["SqlConnectionString"];
                storageAccountName = settings["StorageAccountName"];
                storageAccountKey = settings["StorageAccountKey"];
                tableName = settings["TableName"];

                console.log("connectionString = %s", connectionString);
                console.log("storageAccountName = %s", storageAccountName);
                console.log("storageAccountKey = %s", storageAccountKey);
                console.log("tableName = %s", tableName);

                client = azure.createTableService(storageAccountName, storageAccountKey);
            }
        });

    In this way we don't need to amend the code for the configurations between the local and cloud environments, since the service runtime takes care of it. At the end of the code we listen for the application on the port retrieved from the SDK as well.

        azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) {
            if (error) {
                console.log("ERROR: getCurrentRoleInstance");
                console.log(JSON.stringify(error));
            }
            else {
                console.log(JSON.stringify(instance));
                if (instance["endpoints"] && instance["endpoints"]["nodejs"]) {
                    var endpoint = instance["endpoints"]["nodejs"];
                    app.listen(endpoint["port"]);
                }
                else {
                    app.listen(8080);
                }
            }
        });

    But if we tested the application right now we would find that it cannot retrieve any values from the service runtime. This is because, by default, the entry point of this role is defined as the worker role class. In the Windows Azure environment the service runtime opens a named pipe to the entry point instance so that the instance can connect to the runtime and retrieve values. But in this case, since the entry point was the worker role and Node.js was launched inside the role, the named pipe was established between our worker role class and the service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the Azure project and add a new element named Runtime, then add an element named EntryPoint which specifies the Node.js command line. That way the Node.js application will have the connection to the service runtime and will be able to read the configurations. Start the Node.js application in the local emulator and we can see it retrieved the connections and storage account for local.
    And if we publish our application to Azure it works with WASD and the storage service through the configurations for the cloud.

    Summary

    In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime, and how to retrieve the configuration settings and the endpoint information. In order to make the service runtime available to my Node.js application I needed to create an entry point element in the CSDEF file and set "node.exe" as the entry point. I used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, and how to use Windows Azure Web Site and a Windows Azure Cloud Service worker role to host a Node.js application. I also described how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very new and young network application platform, but since it is simple, easy to learn and deploy, and utilizes a single-threaded non-blocking IO model, Node.js has become more and more popular for web application and web service development, especially for IO-intensive projects. And as Node.js is very good at scaling out, it is all the more useful on a cloud computing platform. Using Node.js on the Windows platform is new, too. The modules for SQL database and the Windows Azure SDK are still under development and enhancement: "node-sqlserver" does not yet support SQL parameters, and "azure" does not yet support creating the storage client from a storage connection string. But Microsoft is working on making them easier to use, and on adding more features and functionality.

    PS: you can download the source code here. You can download the source code of my "Copy all always" tool here.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • How does the PPA fit into the scenario of publishing an application to the Ubuntu Software Center?

    - by Mridang Agarwalla
    I've been going through the docs for the past couple of hours but I haven't understood what a PPA is. I have a cross-platform Java application that I'd like to publish to the Ubuntu Software Center. My application is open source and I'm using GitHub. Apparently, publishing applications to the store isn't as simple as uploading a deb package - am I right? I need to create an account on Launchpad and put all my code there. I don't intend to move from Git to Bzr merely for the sake of publishing to the app store, but luckily one is able to set up source-code mirroring from GitHub to Launchpad. Since my application is still very premature, it'll have updates fairly often. When I build my application on my machine, do I simply go to my Ubuntu App Developer page and upload the new deb package, or do they build my application from source? What exactly is the PPA for? I don't think I'll need too many of the Launchpad features, so I'd like to stick to GitHub if possible. (Publishing for Ubuntu really isn't trivial. I can see why there are so many developers out there who haven't published their applications to the Ubuntu Software Center. Publishing an Android application has been the easiest so far.)
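    For context: a PPA (Personal Package Archive) is a Launchpad-hosted apt repository. You upload a signed source package, Launchpad builds the binaries on its build farm, and users who add the PPA receive the package and every subsequent update through apt. A typical upload cycle, sketched with a hypothetical package and PPA name and assuming the source tree already carries Debian packaging:

        # build a signed source package from the debianized source tree
        debuild -S -sa
        # upload it to the PPA; Launchpad builds the binaries server-side
        dput ppa:mridang/myapp ../myapp_0.1-1_source.changes

        # users then subscribe to the PPA and install normally
        sudo add-apt-repository ppa:mridang/myapp
        sudo apt-get update && sudo apt-get install myapp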

    Read the article

  • Differences between software testing processes and techniques?

    - by Aptos
    I get confused between these terms. For example, should unit testing be listed as a software testing process or a technique? I think unit testing is a software testing technique. And how about test-driven development? Can you give me some examples of software testing processes and techniques? In my opinion, a software testing process is part of the software development life cycle. For example, if we use the V-Model, the software testing process will be system test, acceptance test, integration test... Thank you.

    Read the article

  • Access to the Technology Software

    - by rituchhibber
    The Technology Program software is available to Oracle partners, free of charge, for demonstration and development purposes according to the terms of the OPN Agreement. Instructions for ordering software:

    - Download software via the Oracle Software Delivery Cloud website. Downloads are available in most countries. To determine if downloads are available within your country, click on the Download software link.
    - Request a physical shipment of the software by downloading and completing the Development and Demonstration Ordering Document (XLS). Should you have additional questions or need assistance in completing the ordering document, please contact your local Partner Business Center. Incomplete orders will not be processed.

    Read the article

  • How is a "Software Developer" different from a "Software Consultant"? What makes a consultant?

    - by Kumar
    I have seen a lot of people claiming to be "software consultants". These consultants do what a normal software developer does: write code, estimate tasks, fix bugs, attend meetings, etc. The only difference is the financials; consultants end up earning more. So how is a software developer different from a "consultant"? In addition to the main question, I would like to know how a software developer can become a consultant. Are there any specific guidelines for a consultant? Do they need to amass certifications and write research papers? Please do not confuse the software consultant with a management consultant; the software consultants I have seen are not managers.

    Read the article

  • Can I use my prepaid phone balance (in pesos) to buy from the Software Centre?

    - by obetus
    Using our local network's mobile broadband, we can buy games and applications from our prepaid load balance. Is there any possible way to use it in the Ubuntu Software Center too? Additional details: I'm using mobile broadband for the internet connection. This broadband modem has a SIM card and an account number to which you can load money by buying a prepaid card worth 100, 300 or 500 pesos, provided by our local network. We use this mobile broadband when there is no wifi connection. There are two kinds of mobile broadband accounts: postpaid and prepaid. I use a prepaid account; this kind of account can hold money for transactions like data plans, from 10 pesos for 30 minutes of internet connection up to 200 pesos for 5 days of internet connection, and it can be loaded with anywhere from 5 pesos up to thousands of pesos. Now, since this prepaid mobile broadband can hold money in pesos and has an internet connection, I think it could also be used for buying goods, applications or games via the internet. I think it only needs software that can detect the SIM card number and the money balance for transactions. Sorry for my bad English, but I hope you get my point.

    Read the article

  • How to create browser based software?

    - by erkant
    I have recently used some software which comes as a regular setup file: you install the software, and then when you run it, it opens the browser and uses localhost with some specific port number to connect to the software, and runs it from there. I find it quite useful and interesting, but I don't even know whether this kind of software and programming methodology has a name. Therefore, I would like to learn which programming languages, APIs and frameworks are specifically designed for this purpose. One example of this is Metasploit. You can download its setup file and install it casually like any other software; when everything finishes and you want to use the software, it will open the browser and connect to http://localhost:3790/, where Metasploit will load and start.
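    There is no single name for this, but it is usually described as a locally hosted web application: the installer sets up an ordinary web server bound to localhost, and the "UI" is just a browser pointed at it. Any web stack can do it; Metasploit's web UI, for instance, is an ordinary web application served this way. A minimal sketch of the pattern in Node.js with Express (the port and page content are arbitrary choices for illustration):

        var express = require("express");
        var app = express();

        // serve the application's UI exactly like any web site
        app.get("/", function (req, res) {
            res.send("<h1>My locally hosted application</h1>");
        });

        // bind only to the loopback interface so the app is
        // reachable from this machine's browser alone
        app.listen(3790, "127.0.0.1", function () {
            console.log("Open http://localhost:3790/ in a browser");
        });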

    Read the article

  • Ubuntu Software Centre is gone! How can I get it back?

    - by Mochan
    Right. I am having MAJOR issues. (It would be much appreciated if you would solve them for me! Check my profile for open questions...) Following an answerer - who, sorry to say, didn't give a great answer, because it was just a graphical way of doing what I had already done - I was advised to purge a library. It turns out that library controlled the Advanced Packaging Tool (apt), the Ubuntu Software Centre and the repositories. Not a great tool to mess with. While purging the library, it came up with 'Removing MyUnity... Removing x... Removing x... Removing Ubuntu Software Centre... Removing x... Removing apt...'. I quickly recognized this and force shut down my system, but I guess it was too late. So... well, plainly put - how do I get it back?!
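    A possible recovery path, assuming apt and dpkg themselves still run (the package name below is the usual one for this Ubuntu era; if apt itself is gone, the same .deb files can be fetched from packages.ubuntu.com on another machine and installed with dpkg -i):

        # let apt repair any half-removed packages first
        sudo dpkg --configure -a
        sudo apt-get update
        sudo apt-get install -f
        # then reinstall the removed front-end
        sudo apt-get install --reinstall software-center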

    Read the article

  • Using AlarmManager to start a service at a specific time

    - by Javadid
    Hi friends, I have searched in a lot of places but couldn't find a clean, sequential explanation of how to start a service (or, if that's not possible, an activity) at a specific time daily using the AlarmManager. I want to register several such alarms, and triggering them should result in a service being started. I'll have a small piece of code in the service which can then execute, and I can finish the service for good.

        Calendar cal = Calendar.getInstance();
        Calendar cur_cal = Calendar.getInstance();
        cur_cal.setTimeInMillis(System.currentTimeMillis());
        Date date = new Date(cur_cal.get(Calendar.YEAR), cur_cal.get(Calendar.MONTH), cur_cal.get(Calendar.DATE), 16, 45);
        cal.setTime(date);

        Intent intent = new Intent(ProfileList.this, ActivateOnTime.class);
        intent.putExtra("profile_id", 2);
        PendingIntent pintent = PendingIntent.getService(ProfileList.this, 0, intent, 0);
        AlarmManager alarm = (AlarmManager) getSystemService(Context.ALARM_SERVICE);
        alarm.set(AlarmManager.RTC_WAKEUP, cal.getTimeInMillis(), pintent);
        System.out.println("The alarm set!!");

    I tried this code to activate the alarm at 4:45, but it's not firing the service. Do I have to keep the process running? Am I doing anything wrong? One more thing: my service gets executed perfectly if I use the following code:

        long firstTime = SystemClock.elapsedRealtime();
        alarm.setRepeating(AlarmManager.ELAPSED_REALTIME_WAKEUP, firstTime, 30*1000, pintent);
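    A likely culprit, offered as a hedged observation: the deprecated java.util.Date constructor expects the year as an offset from 1900, so passing Calendar.YEAR's absolute value (e.g. 2012) produces a trigger time centuries in the future, and the alarm never fires within any test window. Building the trigger time directly on the Calendar avoids the conversion entirely; a sketch reusing the alarm and pintent variables from the question:

        // build today's 16:45 on the Calendar itself, with no Date conversion
        Calendar cal = Calendar.getInstance();
        cal.set(Calendar.HOUR_OF_DAY, 16);
        cal.set(Calendar.MINUTE, 45);
        cal.set(Calendar.SECOND, 0);
        // if 16:45 has already passed today, schedule for tomorrow
        if (cal.getTimeInMillis() <= System.currentTimeMillis()) {
            cal.add(Calendar.DAY_OF_MONTH, 1);
        }
        // fire daily at that time
        alarm.setRepeating(AlarmManager.RTC_WAKEUP, cal.getTimeInMillis(),
                AlarmManager.INTERVAL_DAY, pintent);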

    Read the article

  • Starting Wicket web application with OSGi HTTP Service

    - by Jaime Soriano
    I'm trying to start a Wicket application using the Felix implementation of the OSGi HTTP Service. For that I just register the servlet using WicketServlet with the applicationClassName parameter:

        props.put("applicationClassName", MainApplication.class.getName());
        service = (HttpService) context.getService(httpReference);
        service.registerServlet("/", new WicketServlet(), props, null);

    I have also tried using the Felix Whiteboard implementation, registering the web application as a Servlet service:

        props.put("alias", "/");
        props.put("init.applicationClassName", MainApplication.class.getName());
        registration = context.registerService(Servlet.class.getName(), new WicketServlet(), props);

    In both cases it fails when I deploy it using Pax Runner and Felix (mvn package install pax:run -Dframework=felix -Dprofiles=log,config). The exception seems to be related to the ClassLoader:

        [Jetty HTTP Service] ERROR org.apache.felix.http.whiteboard - Failed to register servlet
        org.apache.wicket.WicketRuntimeException: Unable to create application of class es.warp.sample.HTTPLocalGUI.MainApplication
        ...
        at java.lang.ClassLoader.loadClass(ClassLoader.java:252)
        at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:247)
        ...

    I have tried exporting everything in the bundle and it does the same. The strangest thing is that it works perfectly if I deploy it using Equinox (mvn package install pax:run -Dframework=equinox -Dprofiles=log,config). It seems to be a visibility issue, but I don't know how to fix it. Am I doing something wrong? Should I try to extend WicketServlet to take control of the instantiation of the application? Or maybe use an application factory? Any light is welcomed.
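    The asker's own instinct (extending WicketServlet to control instantiation) is a known workaround in OSGi environments: instead of letting Wicket resolve applicationClassName by name via Class.forName, which may consult a classloader that cannot see the bundle under Felix, hand Wicket a factory that constructs the application directly, so the bundle's own classloader is used. A sketch against the Wicket 1.4-era API; the method names should be verified against the Wicket version in use:

        import org.apache.wicket.protocol.http.IWebApplicationFactory;
        import org.apache.wicket.protocol.http.WebApplication;
        import org.apache.wicket.protocol.http.WicketFilter;
        import org.apache.wicket.protocol.http.WicketServlet;

        public class MyWicketServlet extends WicketServlet {
            @Override
            protected WicketFilter newWicketFilter() {
                return new WicketFilter() {
                    @Override
                    protected IWebApplicationFactory getApplicationFactory() {
                        return new IWebApplicationFactory() {
                            public WebApplication createApplication(WicketFilter filter) {
                                // constructed directly, so the bundle classloader applies
                                return new MainApplication();
                            }
                        };
                    }
                };
            }
        }

    The servlet is then registered as new MyWicketServlet() instead of new WicketServlet(), and the applicationClassName parameter is no longer needed.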

    Read the article

  • BizTalk WCF Service

    - by WtFudgE
    Hi, I am getting an error in my event viewer after deploying a WCF service on our server:

        The Messaging Engine received an error from transport adapter "WCF-BasicHttp" when notifying the adapter with the BatchComplete event. Reason "Value does not fall within the expected range.".

    The thing is, this service was running before and worked fine. I just modified a schema a bit, undeployed the service and redeployed. When I then talk to this service, the event viewer shows me this message. If I deploy the same service locally it works fine. Also, if I browse to my service with Explorer it shows no errors. Normally when the receive location is wrong, or the user being used isn't in the isolated BizTalk user group, it gives an error here, but this isn't the case. Does anyone have any idea what I should do? My problem is pretty urgent. I googled my error message but without much success. Thanks y'all.

    Read the article

  • Problems with WCF endpoints hosted from Windows Service

    - by Dilip
    I have a managed Windows service that hosts a couple of WCF endpoints. The service is set to start automatically when the PC is restarted. On reboot I find that this line of code in the OnStart() method of the service:

        ServiceHost wcfHost1 = new ServiceHost(typeof(WCFHost1));

    takes somewhere between 15-20 seconds to execute. Actually I have two such statements, but the second one executes in a flash; it is the first one that takes so long. Does anyone know what could be causing the bottleneck? Because of this, sometimes the call exceeds 30 seconds, and as a result the SCM thinks my service timed out while trying to initialize itself. Now, I know it would be easy for me to just spin off a thread to do this and return from OnStart() right away, but I'd like to know what could cause this delay. This happens only when the service starts up on PC reboot. If the PC is up and running, the service starts and stops in less than a second.
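    Two hedged suggestions. First, the "slow only at boot" pattern is commonly caused by the CLR attempting an Authenticode certificate-revocation check for publisher evidence before the network stack is fully up; adding <generatePublisherEvidence enabled="false"/> under <runtime> in the service's app.config is the documented mitigation for that case. Second, whatever the cause, OnStart can ask the SCM for more time so a slow first ServiceHost does not get the service killed. A sketch (WCFHost1 stands for the questioner's own service type):

        using System.ServiceModel;
        using System.ServiceProcess;

        public class MyWindowsService : ServiceBase
        {
            private ServiceHost wcfHost1;

            protected override void OnStart(string[] args)
            {
                // ask the SCM for extra time beyond the default 30s window
                RequestAdditionalTime(60000); // milliseconds

                wcfHost1 = new ServiceHost(typeof(WCFHost1)); // questioner's type
                wcfHost1.Open();
            }

            protected override void OnStop()
            {
                if (wcfHost1 != null)
                {
                    wcfHost1.Close();
                }
            }
        }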

    Read the article

  • My service does not start on Windows 2008 (it works on Windows 2003)

    - by akirekadu
    When we install our product on Windows 2008 SP2, a couple of services fail to start. After trying different things, we figured out that these services were able to start when "Log on as" is set to "Local system account". The services do need to run as a specific user because they require access to secure resources, and they ran just fine under this special user account on Windows 2003. I am thinking the problem is related to UAC (User Account Control). In interactive mode one can elevate permissions by answering the security dialog box; how does one do the same for a service? How do I configure the service so it runs with the necessary permissions? Thanks! A couple of notes I would like to add: one solution I tried was to execute the installer as administrator. The service does get installed; however, this creates another problem: an embedded third-party product and its files get installed with admin-only rights. So we do need to run the installer as the logged-in user.

    Read the article

  • How to use a remote service?

    - by LEE YONGGUN
    Hi, I'm trying to use a remote service between two simple applications, but it's not easy for me, so any advice will help. Here's my case: I made one app which plays music in a service. It has two components: one is an activity controlling the service using three buttons (play, pause, stop), and it is working fine. The other app is a simple activity which has four buttons: bind, play, stop, unbind. When I click bind, it's confirmed by a Toast message, but when I click the play button, an error occurs. I want to control the first app's music-playing service from the second activity, so I'm trying to use a remote service. I made the same .aidl file in each app project. In the aidl file I defined the methods "playing" and "stoping", and I implemented those methods in the music service class; the implementation simply uses intents and startService/stopService. In DDMS there is "java.lang.SecurityException: Binder invocation to an incorrect interface". That's my situation, so please tell me what the problem is. Any advice could help me. Thanks, Gun.
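    A hedged pointer: that SecurityException is typically raised when the two processes disagree about the Binder interface descriptor, which AIDL derives from the file's package and name. Both projects must carry a byte-identical copy of the .aidl file under the same package path, and the client must bind with an intent the service's filter actually matches. A sketch with illustrative package and action names:

        // src/com/example/music/IMusicControl.aidl -- identical file in BOTH projects
        package com.example.music;

        interface IMusicControl {
            void playing();
            void stoping();
        }

        // client side: bind using the action the service declares in its intent filter
        // Intent intent = new Intent("com.example.music.IMusicControl");
        // bindService(intent, connection, Context.BIND_AUTO_CREATE);

    If the packages, file names or method lists differ between the two copies, the generated stub and proxy carry different descriptors and the Binder call is rejected with exactly this exception.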

    Read the article

  • Changing an ASP.NET web service namespace when it is already in production

    - by eloQ
    Due to some unfortunate events we published a web service with the tempuri namespace a couple of months ago and no one noticed (not in our company, not in the companies connecting to the web service), even though it has been live since then and around 25 companies already access it. Now I'm thinking of correcting that mistake and fixing the namespace to a proper value. The only problem is: as soon as we do it, all the programs and services connected to this web service will stop working, and I can't really allow that to happen. Is there any way to fix the namespace going forward, or to have the web service operate under two namespaces as one web service? Any tip at all on how to get rid of the tempuri namespace without needing to synchronize the change with all the external companies? I'm all out of ideas, so any help would be much appreciated! Thanks! I hope this question is alright here; first-time user, I found tempuri.org-related posts through search engines during my research, just not the right one.
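    One commonly suggested migration path, sketched under the assumption that this is an ASMX service: a single service class cannot answer under two namespaces, but the same implementation can be exposed at a second endpoint with the corrected namespace while the old tempuri endpoint lives on for existing consumers. The namespace is just the WebService attribute on the class (the URL below is a placeholder):

        // NewService.asmx -- new endpoint with the corrected namespace
        [WebService(Namespace = "http://services.example.com/orders/2012/10")]
        public class OrderServiceV2 : System.Web.Services.WebService
        {
            [WebMethod]
            public string Ping()
            {
                return "pong";
            }
        }

    The original .asmx keeps pointing at the original class (still on the default tempuri namespace), with both classes delegating to the same internal logic; existing consumers keep working, new integrations get the corrected endpoint, and the tempuri endpoint can be retired once everyone has migrated.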

    Read the article

  • Android: using a local service to refresh the map UI

    - by urobo
    I don't need a strictly code-related answer; I just need somebody to tell me what I am missing. My application has to retrieve the positions of some users I know from a web service (XML-RPC) and update their positions on a MapView. So I decided to use a Service and an Activity extending MapActivity to show the results. I thought about two solutions: I) Start the service, make it ask for these positions every minute, and send them to the activity as a bundle via an intent. (This didn't work out well, since once shown I couldn't find a method to let the activity keep refreshing itself until it stops receiving intents+data from the service.) II) Incorporate a thread within the activity which starts the service via context.startService(...) every minute; the map UI refreshes itself once the service sends back an intent and stops itself. (Maybe I will fall into the same problem category as before; I haven't tried yet.) I am also getting directions (via the maps.google web service); I'd like to refresh only the users' positions on the map and keep the route. What am I missing? Do you have any suggestions related to activities/services internal mechanics, launch modes, broadcast receivers or intent filters? Thanks in advance.
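    A common pattern for exactly this case, sketched with hypothetical action and extra names: let the background service broadcast each new batch of positions, and have the MapActivity register a BroadcastReceiver in onResume/onPause; the receiver updates only the positions overlay, leaving the route overlay untouched.

        // in the MapActivity
        private final BroadcastReceiver positionsReceiver = new BroadcastReceiver() {
            @Override
            public void onReceive(Context context, Intent intent) {
                // hypothetical extras: parallel arrays of user coordinates
                double[] lats = intent.getDoubleArrayExtra("lats");
                double[] lons = intent.getDoubleArrayExtra("lons");
                usersOverlay.setPositions(lats, lons); // your overlay, not the route one
                mapView.invalidate();                  // redraw; route overlay is untouched
            }
        };

        @Override
        protected void onResume() {
            super.onResume();
            registerReceiver(positionsReceiver, new IntentFilter("com.example.POSITIONS_UPDATED"));
        }

        @Override
        protected void onPause() {
            unregisterReceiver(positionsReceiver);
            super.onPause();
        }

        // in the service, after each fetch:
        // Intent i = new Intent("com.example.POSITIONS_UPDATED");
        // i.putExtra("lats", lats); i.putExtra("lons", lons);
        // sendBroadcast(i);

    This way neither side has to poll the other, the activity stays current for as long as it is in the foreground, and the route drawn from the directions service is never rebuilt on a position update.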

    Read the article
