Search Results

Search found 2523 results on 101 pages for 'communication'.


  • Cisco: Site-to-site VPN with cisco 878 and ASA weirdness

    - by cpf
    I currently have 2 sites, connected to each other through 2 firewalls/routers in a site-to-site VPN. Pinging from server to server through that VPN (over the 2Mb/2Mb SDSL) obviously works. However, at one site we have another internet connection (7M/400k ADSL), and only the link between the two sites should stay on the SDSL connection. All PCs should use the ADSL connection for internet; only communication between the servers, and between PCs and the server at the other site, should go over the SDSL/VPN. What is configured at the moment: the server uses the SDSL directly as its default gateway (since it isn't meant to surf anything, that's a safe config), and the PCs use the ADSL as their default gateway. Now I want everything destined for the range used at the other site to be sent from the ADSL modem to the SDSL modem, which holds the VPN connection. I figured I could use OSPF to do so, but OSPF doesn't seem to "detect" the range of the remote site. Also (due to bad IP subnetting by the other administrator), the IP used internally for the server at the other site also exists on the internet, which causes a lot of confusion: RDP-ing from our server to the server at the other site works (somehow), but a traceroute from the SDSL router (which, in my opinion, should go over the VPN) actually goes out over the internet. My questions: Why does the server reach the remote IP through the VPN while the SDSL router does not? And why can't I route from the ADSL router to the SDSL router over the VPN? I would seriously appreciate some help, since I can't figure out why it behaves like this.
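
    A static route is often simpler than OSPF here: the PCs (or, better, the ADSL router, in its own configuration syntax) keep the ADSL as their default gateway and send only the remote site's range to the SDSL router, which carries the VPN. A minimal sketch on a Linux host; all addresses below are hypothetical:

        # Hypothetical addressing: local LAN 192.168.1.0/24, remote site 192.168.2.0/24,
        # SDSL router (VPN endpoint) 192.168.1.2, ADSL router 192.168.1.1.
        # On a host whose default gateway is the ADSL router, steer only the
        # remote site's range via the SDSL router:
        ip route add 192.168.2.0/24 via 192.168.1.2 dev eth0
        # Check which gateway a remote-site address would actually use:
        ip route get 192.168.2.10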

    Read the article

  • How to collect the performance data of a server during an unreachable/down period using Nagios?

    - by gsc-frank
    Sometimes services and hosts stop responding due to poor server performance. I mean, if for some reason (lots of concurrent service requests, an expensive backup running on the server, or anything else that consumes tons of server resources) a server's performance is badly degraded, the server may not be able to establish any "normal network communication" (without triggering whatever standard timeouts are defined for such communication). Knowing the host's performance data (CPU, memory, ...) for that period, if it is available (the host is not down and, despite the degradation, plugins can still collect performance data), could be very useful for a sysadmin trying to determine what caused the problem, or at least whether the host's performance was fine and played no part in the host/service outage. This could be solved with remote active checks (NRPE) or remote passive checks (NSCA) if those solutions could store (buffer) perf data and send it to the central Nagios server once host performance or the network outage allows it. I have read the documentation for both and can't find any reference to such a buffering mechanism, nor to what happens when NSCA can't reach the Nagios server. Any idea how to fill this gap? The data would be very useful for forensic analysis. EDIT: My question isn't about which tools I can use to debug perf problems or gather perf data for analysis; it is about how to collect host perf data with Nagios even during a network outage, for later (forensic-style) analysis. The idea is to feed that data into Nagios graphers like pnp4nagios and NagiosGrapher. I know I could install a tool like Cacti on each of my hosts and have a kind of redundant performance data collection, but I really want to avoid that and cover all perf analysis requirements with one tool: Nagios.
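
    As far as I know the stock NSCA client does not buffer results itself, so a common workaround is a small spool on the monitored host: local checks append passive results to a file, and a cron job flushes the file through send_nsca whenever the central server is reachable again. A rough sketch; the hostname, paths and the example check are placeholders:

        #!/bin/bash
        # Append one passive check result (tab-separated: host, service, state, output).
        SPOOL=/var/spool/nagios-passive/results.dat
        printf '%s\t%s\t%s\t%s\n' "$(hostname)" "Load" 0 "$(cat /proc/loadavg)" >> "$SPOOL"

        # Try to flush the spool to the central server; keep it for later if the send fails.
        if send_nsca -H nagios.example.com -c /etc/send_nsca.cfg < "$SPOOL"; then
            : > "$SPOOL"
        fi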

    Read the article

  • GRE Tunnel over IPsec with Loopback

    - by Alek
    I'm having a really hard time trying to establish a VPN connection using a GRE over IPsec tunnel. The problem is that it involves some sort of "loopback" connection which I don't understand, let alone know how to configure, and the only help I could find relates to configuring Cisco routers. My network consists of a router and a single host running Debian Linux. My task is to create a GRE tunnel over an IPsec infrastructure, intended in particular to route multicast traffic between my network, which I am allowed to configure, and a remote network, for which I only have a form containing some setup information (IP addresses and phase information for IPsec). For now it suffices to establish communication between this single host and the remote network, but in the future it will be desirable for the traffic to be routed to other machines on my network. As I said, this GRE tunnel involves a "loopback" connection which I have no idea how to configure. From my previous understanding, a loopback connection is simply a local pseudo-device used mostly for testing purposes, but in this context it might be something more specific that I don't know about. I have managed to establish the IPsec communication using racoon and ipsec-tools, and I believe I'm familiar with creating tunnels and adding addresses to interfaces using ip, so the focus is on the GRE step. The worst part is that the remote peers do not respond to ping requests, and debugging the general setup is very difficult due to the encrypted nature of the traffic. There are two pairs of IP addresses involved: one pair for the GRE tunnel peer-to-peer connection and one pair for the "loopback" part. There is also an IP range involved, which is supposed to provide the final IP addresses for the hosts inside the VPN. My question is: how (or whether) can this setup be done? Do I need some special software or another daemon, or does the Linux kernel handle every aspect of the GRE/IPsec tunneling? Please let me know if any extra information would be useful. Any help is greatly appreciated.
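
    For what it's worth, on Linux the kernel's ip_gre module handles the GRE side; no extra daemon beyond the IPsec stack is needed. The "loopback" addresses are most likely just the tunnel-internal point-to-point pair that you assign to the GRE interface itself rather than to a physical NIC. A minimal sketch with placeholder addresses (substitute the values from the remote side's form):

        # Placeholder endpoints; substitute the addresses from the IPsec/GRE form.
        LOCAL_WAN=198.51.100.1
        REMOTE_WAN=203.0.113.1
        modprobe ip_gre
        ip tunnel add gre1 mode gre local "$LOCAL_WAN" remote "$REMOTE_WAN" ttl 255
        ip addr add 10.255.0.1/30 dev gre1     # the tunnel-internal ("loopback") pair
        ip link set gre1 up
        ip route add 10.20.0.0/24 dev gre1     # remote VPN range through the tunnel

    Because the GRE packets travel between the same two WAN endpoints, an IPsec policy matching protocol gre between those addresses should pick them up and encrypt them transparently.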

    Read the article

  • NATing IPv4 while routing IPv6

    - by Hugo
    I have the following setup: client(s) <---> (eth0) router (eth1) <---> wan I have a static IPv4 address and a /48 IPv6 address block. I need to connect all the clients to the WAN, and each client will have its own public IPv6 address. Meanwhile, I need to NAT those same clients over IPv4 to the WAN. Everything IPv4-related, including the NAT, is working fine. IPv6 communication between (eth0) and the clients works fine, as does IPv6 communication between (eth1) and the WAN. To provide IPv6 to all my clients, I've thought of two choices: Having the router act as a gateway, with a different IP on each interface. This sounds like I would need my ISP to route the entire block through that single IP, so it's not really an option. Transparently passing IPv6 packets between eth0 and eth1, so all clients can communicate with the upstream gateway (I would actually use a switch here if it weren't for the need to remain IPv4 compatible). So, since I've opted for the second choice, I'm in doubt: how can I pass all IPv6 traffic from eth0 to eth1 transparently? What I need is a layer 3 bridge, but Linux's bridge-utils creates a layer 2 bridge (which would bridge IPv4 as well, and I can't have that). This is a DD-WRT device, but it's pretty much an embedded Linux, so most suggestions that work on Linux are welcome. Thanks.
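
    One way to get the effect of a layer 3 "bridge" for IPv6 only is NDP proxying: the router routes IPv6 as usual but answers neighbor solicitations on the WAN side on behalf of the clients, so the upstream gateway never needs extra routes. A sketch using the documentation prefix 2001:db8::/48 as a stand-in for your block; the ndppd daemon can automate the per-address proxy entries:

        # Enable IPv6 forwarding and NDP proxying on the WAN interface
        sysctl -w net.ipv6.conf.all.forwarding=1
        sysctl -w net.ipv6.conf.eth1.proxy_ndp=1

        # For each client address that should be reachable from the WAN segment:
        ip -6 neigh add proxy 2001:db8:1234::10 dev eth1
        ip -6 route add 2001:db8:1234::10/128 dev eth0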

    Read the article

  • How to troubleshoot connectivity when curl gets an *empty response*

    - by chad
    I want to know how to proceed in troubleshooting why a curl request to a web server doesn't work. I'm not looking for help that depends on my environment; I just want to know how to collect information about exactly which part of the communication is failing, port numbers, etc. chad-integration:~ # curl -v 111.222.159.30 * About to connect() to 111.222.159.30 port 80 (#0) * Trying 111.222.159.30... connected * Connected to 111.222.159.30 (111.222.159.30) port 80 (#0) > GET / HTTP/1.1 > User-Agent: curl/7.19.0 (x86_64-suse-linux-gnu) libcurl/7.19.0 OpenSSL/0.9.8h zlib/1.2.3 libidn/1.10 > Host: 111.222.159.30 > Accept: */* > * Empty reply from server * Connection #0 to host 111.222.159.30 left intact curl: (52) Empty reply from server * Closing connection #0 So, I understand that an empty reply means curl didn't get any response from the server. No problem, that's precisely what I'm trying to figure out. But what more specific info can I derive from curl here? It was able to successfully "connect", so doesn't that involve some bidirectional communication? If so, why does the response never arrive? Note that I've verified my service is up and returning responses, and that I'm a bit green at this level of networking, so feel free to provide some general orientation material.
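
    For what it's worth, curl's "connected" line only means the TCP three-way handshake completed; the empty reply means the server then closed the connection without sending a single HTTP byte. To see exactly where the exchange dies, capture the conversation while repeating the request; the trace shows whether the GET leaves the client and whether the server answers with data, a FIN or a RST before closing. A sketch (the interface name is an assumption):

        # On the client (or the server): watch the conversation with the web server
        tcpdump -i eth0 -nn -A host 111.222.159.30 and port 80

        # In another terminal, repeat the request with a timeout
        curl -v --max-time 10 http://111.222.159.30/

        # On the server: confirm that something is actually listening on port 80
        netstat -ltnp | grep ':80'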

    Read the article

  • VMWare converter performance

    - by bellocarico
    Hello, I have a question about my test lab. It's more about understanding the concept than about putting this into production: I have an ESXi host with a few Linux/Windows VMs configured, and I'd like to use VMware Converter to create backups. To speed up the process I created a Windows VM on the same ESXi host, with Windows 7 and VMware Converter installed. The host has a gigabit card but it's currently connected to a 100Mb full-duplex port; Windows 7 sees a 1Gb card connected. When I do the backup using VMware Converter I specify the host IP as both source and destination, so I thought the copy would be faster than going through my laptop across the network. Well, to cut a long story short: I get dreadful performance (4Mb/sec). I'm a bit confused by this because, even though the host uplink runs at 100Mb, communication between VMs and the host shouldn't (correct me if I'm wrong) be limited by that. I did tweak Windows 7 to optimise network performance, but got only a small improvement; I still need 4 hours to back up a 50GB (thin) VM. Additionally I wanted to ask: would jumbo frames help here? I know jumbo frames have to be supported end to end, and the network switch the host is currently connected to doesn't support them, but I was wondering: 1) Does the ESXi host support jumbo frames at all? 2) Can I enable them somehow? 3) If I do, I guess bulk transfers between VMs and the host would improve, but would this affect communication going through the real switch, since it doesn't do jumbo? Thanks for reading
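
    On the jumbo frame question: ESXi of that era does support them, but they are enabled per vSwitch from the CLI rather than the GUI, and the guest NIC (e.g. vmxnet3) must be set to the same MTU. The exact commands vary by ESXi version, so treat this ESXi 4.x-style sketch as a starting point only:

        # List vSwitches and their current MTU
        esxcfg-vswitch -l
        # Raise the MTU on the vSwitch that carries the VM/VMkernel traffic
        esxcfg-vswitch -m 9000 vSwitch0

    Traffic between VMs on the same vSwitch never touches the physical switch, but anything leaving the host still has to fit the physical switch's MTU, so jumbo frames would only help host-internal transfers in this setup.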

    Read the article

  • Multimaster Keepalived Configuration (Virtual IP with Load Balancing)

    - by Rad Akefirad
    Here are the requirements: 1. High availability 2. Load balancing First configuration 1. Two Linux servers have been configured with one static IP each: 10.17.243.11 and 10.17.243.12. 2. Keepalived has been installed and configured with one VRRP instance to provide one virtual IP (10.17.243.10 as VIP, 10.17.243.11 as master and 10.17.243.12 as backup). 3. Everything works fine. The VIP is assigned to the master server (10.17.243.11) as long as it is up and running; as soon as it goes down, the VIP moves to the backup server (10.17.243.12). 4. The problem is that all communication goes to the master server. Second configuration 1. I found an active-active configuration for Keepalived, which is possible by defining more than one VRRP instance, so that both servers have two IPs (real 10.17.243.11 and virtual 10.17.243.10 for server #1, and real 10.17.243.12 and virtual 10.17.243.20 for server #2). 2. Everything works fine: we have two VIPs which are both accessible (HA). But all communication coming to each IP still goes to a single machine (either server #1 or #2, depending on the IP). I found some DNS tricks to work around this limitation, but they aren't acceptable in our case. Question: Is there any way to have one virtual IP that is served by both servers, so that both servers handle part of the workload (like web server load balancing), using either Keepalived or some other tool? Thanks in advance.
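
    Keepalived can provide this itself through its LVS integration: alongside the VRRP instance you add a virtual_server block, so whichever node currently holds the VIP also load balances incoming connections across both real servers. A rough sketch of such a block (the service port 80 and the health check are assumptions); note that with lb_kind DR each real server must also carry the VIP on a non-ARPing loopback alias:

        # Added to /etc/keepalived/keepalived.conf on both nodes, next to the VRRP instance
        virtual_server 10.17.243.10 80 {
            delay_loop 6
            lb_algo rr          # round robin across the two nodes
            lb_kind DR          # direct routing
            protocol TCP

            real_server 10.17.243.11 80 {
                TCP_CHECK {
                    connect_timeout 3
                }
            }
            real_server 10.17.243.12 80 {
                TCP_CHECK {
                    connect_timeout 3
                }
            }
        }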

    Read the article

  • Isolate clients on same subnet?

    - by stefan.at.wpf
    Given n (e.g. 200) clients in a /24 subnet and the following network structure:

        client 1 \
            .     \
            .      switch -- firewall
            .     /
        client n /

    (in words: all clients connected to one switch, and the switch connected to the firewall) Now by default, e.g. client 1 and client n can communicate directly via the switch, without any packets ever reaching the firewall. Therefore none of those packets can be filtered. However, I would like to filter the packets between the clients, so I want to disallow any direct communication between them. I know this is possible using VLANs, but then, according to my understanding, I would have to put every client in its own network. However, I don't even have that many IP addresses: I have about 200 clients, only a /24 subnet, and all clients must have public IP addresses, so I can't just create a private network for each of them (well, maybe using some NAT, but I'd like to avoid that). So, is there any way to tell the switch: forward all packets to the firewall, and don't allow direct communication between clients? Thanks for any hint!

    Read the article

  • Clustered MSDTC

    - by niel
    Hi, I'm setting up a SQL cluster (SQL 2008) on Windows 2008 R2. I enable network access on the local DTC and then create a DTC resource in my cluster. The problem is that when I start up the resource, it does not pull through my settings to enable network access. The log shows this: MSDTC started with the following settings: Security Configuration (OFF = 0 and ON = 1): Allow Remote Administrator = 0, Network Clients = 0, Trasaction Manager Communication: Allow Inbound Transactions = 0, Allow Outbound Transactions = 0, Transaction Internet Protocol (TIP) = 0, Enable XA Transactions = 0, Enable SNA LU 6.2 Transactions = 1, MSDTC Communications Security = Mutual Authentication Required, Account = NT AUTHORITY\NetworkService, Firewall Exclusion Detected = 0 Transaction Bridge Installed = 0 Filtering Duplicate Events = 1 whereas when I restart the local DTC service it says this: Security Configuration (OFF = 0 and ON = 1): Allow Remote Administrator = 0, Network Clients = 1, Trasaction Manager Communication: Allow Inbound Transactions = 1, Allow Outbound Transactions = 1, Transaction Internet Protocol (TIP) = 0, Enable XA Transactions = 1, Enable SNA LU 6.2 Transactions = 1, MSDTC Communications Security = No Authentication Required, Account = NT AUTHORITY\NetworkService, Firewall Exclusion Detected = 0 Transaction Bridge Installed = 0 Filtering Duplicate Events = 1 The settings on both nodes in the cluster are the same. I have reinstalled and restarted too many times to mention. Any ideas?

    Read the article

  • Expresscard Not Detected in PCI-E Adapter

    - by maxpower47
    I'm trying to put an ExpressCard TV tuner (AVerMedia HC82) into my HTPC using this ExpressCard to PCI-E adapter. I've verified that the tuner works fine in my laptop. The motherboard is a Biostar TF7050-M2. When I install it and turn it on, the light on the back of the adapter comes on fine (there are two indicator lights on the back that show whether it is using PCI-E or USB communication; USB communication goes through a USB cable connected between the card and a header on the motherboard), showing that it is working in PCI-E mode. However, the device is never detected in Windows 7 Professional x64: the auto-detect never happens, it doesn't show up in Device Manager, and rescanning for new hardware finds nothing. I tested the whole setup (tuner + adapter) in another PC (also running Win 7 Pro x64) and it worked fine. I also tried: plugging the adapter into the PCI-E x16 slot on the motherboard (I verified first that the x16 slot worked by installing a video card in it); booting into safe mode and rescanning; updating the chipset drivers; installing the tuner drivers first; using a different USB cable, plugged into one of the known-good ports on the back of the board; trying it without the USB cable plugged in; removing the other PCI cards that were installed on the board; and looking through the BIOS for any setting that might be disabling it, all to no avail. I'm at a loss for what else to try. I really don't want to RMA it (the shipping back to Newegg would be almost as much as it cost to buy in the first place). Any ideas?

    Read the article

  • Help - since adding an elastic load balancer to my EC2 web application I cannot connect with the MySQL database (not in AWS)

    - by undefined
    I have a web application that uses an EC2 instance to receive uploaded images, resize them, store them on S3 and update my MySQL database with the image record. This database is hosted outside Amazon Web Services, so it obviously involves communication between the EC2 instance and the database. Images are posted to the upload server from a Flash client, which receives the IP address of the upload server when it is loaded and so sends images to 1.12.23.34/resize_script.php. This has worked great ... until I started trying to add a load balancer. Since ELBs use a DNS name rather than an IP address, I am now passing that DNS name to Flash. Now when I upload images I get the following response from the server: Could not connect to MySQL: Lost connection to MySQL server at 'reading initial communication packet', system error: 111 What might be causing the lost connection to the MySQL server? Are there any additional steps I need to take to allow my upload servers to be load balanced? I have set the host property of my MySQL privileges for this user to %. Any pointers greatly appreciated, thanks.
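
    System error 111 is the Linux errno for "connection refused", which usually points at the TCP connection to port 3306 being rejected (bind-address, a firewall, or a host allow-list at the database end) rather than at the MySQL GRANT host value. A quick way to narrow it down from one of the load-balanced instances; the database hostname and user below are placeholders:

        # From the EC2 upload instance: is the MySQL port reachable at all?
        nc -zv db.example.com 3306

        # If the port answers, try a real login with the application credentials
        mysql -h db.example.com -u appuser -p -e 'SELECT 1'

        # On the database host: is mysqld listening on a public interface or only 127.0.0.1?
        netstat -ltn | grep ':3306'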

    Read the article

  • Azure Grid Computing - Worker Roles as HPC Compute Nodes

    - by JoshReuben
    Overview ·        With HPC 2008 R2 SP1 You can add Azure worker roles as compute nodes in a local Windows HPC Server cluster. ·        The subscription for Windows Azure like any other Azure Service - charged for the time that the role instances are available, as well as for the compute and storage services that are used on the nodes. ·        Win-Win ? - Azure charges the computer hour cost (according to vm size) amortized over a month – so you save on purchasing compute node hardware. Microsoft wins because you need to purchase HPC to have a local head node for managing this compute cluster grid distributed in the cloud. ·        Blob storage is used to hold input & output files of each job. I can see how Parametric Sweep HPC jobs can be supported (where the same job is run multiple times on each node against different input units), but not MPI.NET (where different HPC Job instances function as coordinated agents and conduct master-slave inter-process communication), unless Azure is somehow tunneling MPI communication through inter-WorkerRole Azure Queues. ·        this is not the end of the story for Azure Grid Computing. If MS requires you to purchase a local HPC license (and administrate it), what's to stop a 3rd party from doing this and encapsulating exposing HPC WCF Broker Service to you for managing compute nodes? If MS doesn’t  provide head node as a service, someone else will! Process ·        requires creation of a worker node template that specifies a connection to an existing subscription for Windows Azure + an availability policy for the worker nodes. ·        After worker nodes are added to the cluster, you can start them, which provisions the Windows Azure role instances, and then bring them online to run HPC cluster jobs. ·        A Windows Azure worker role instance runs a HPC compatible Azure guest operating system which runs on the VMs that host your service. The guest operating system is updated monthly. You can choose to upgrade the guest OS for your service automatically each time an update is released - All role instances defined by your service will run on the guest operating system version that you specify. see Windows Azure Guest OS Releases and SDK Compatibility Matrix (http://go.microsoft.com/fwlink/?LinkId=190549). ·        use the hpcpack command to upload file packages and install files to run on the worker nodes. see hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514). Requirements ·        assuming you have an azure subscription account and the HPC head node installed and configured. ·        Install HPC Pack 2008 R2 SP 1 -  see Microsoft HPC Pack 2008 R2 Service Pack 1 Release Notes (http://go.microsoft.com/fwlink/?LinkID=202812). ·        Configure the head node to connect to the Internet - connectivity is provided by the connection of the head node to the enterprise network. You may need to configure a proxy client on the head node. Any cluster network topology (1-5) is supported). ·        Configure the firewall - allow outbound TCP traffic on the following ports: 80,       443, 5901, 5902, 7998, 7999 ·        Note: HPC Server  uses Admin Mode (Elevated Privileges) in Windows Azure to give the service administrator of the subscription the necessary privileges to initialize HPC cluster services on the worker nodes. ·        Obtain a Windows Azure subscription certificate - the Windows Azure subscription must be configured with a public subscription (API) certificate -a valid X.509 certificate with a key size of at least 2048 bits. 
Generate a self-sign certificate & upload a .cer file to the Windows Azure Portal Account page > Manage my API Certificates link. see Using the Windows Azure Service Management API (http://go.microsoft.com/fwlink/?LinkId=205526). ·        import the certificate with an associated private key on the HPC cluster head node - into the trusted root store of the local computer account. Obtain Windows Azure Connection Information for HPC Server ·        required for each worker node template ·        copy from azure portal - Get from: navigation pane > Hosted Services > Storage Accounts & CDN ·        Subscription ID - a 32-char hex string in the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx. In Properties pane. ·        Subscription certificate thumbprint - a 40-char hex string (you need to remove spaces). In Management Certificates > Properties pane. ·        Service name - the value of <ServiceName> configured in the public URL of the service (http://<ServiceName>.cloudapp.net). In Hosted Services > Properties pane. ·        Blob Storage account name - the value of <StorageAccountName> configured in the public URL of the account (http://<StorageAccountName>.blob.core.windows.net). In Storage Accounts > Properties pane. Import the Azure Subscription Certificate on the HPC Head Node ·        enable the services for Windows HPC Server  to authenticate properly with the Windows Azure subscription. ·        use the Certificates MMC snap-in to import the certificate to the Trusted Root Certification Authorities store of the local computer account. The certificate must be in PFX format (.pfx or .p12 file) with a private key that is protected by a password. ·        see Certificates (http://go.microsoft.com/fwlink/?LinkId=163918). ·        To open the certificates snapin: Run > mmc. File > Add/Remove Snap-in > certificates > Computer account > Local Computer ·        To import the certificate via wizard - Certificates > Trusted Root Certification Authorities > Certificates > All Tasks > Import ·        After the certificate is imported, it appears in the details pane in the Certificates snap-in. You can open the certificate to check its status. Configure a Proxy Client on the HPC Head Node ·        the following Windows HPC Server services must be able to communicate over the Internet (through the firewall) with the services for Windows Azure: HPCManagement, HPCScheduler, HPCBrokerWorker. ·        Create a Windows Azure Worker Node Template ·        Edit HPC node templates in HPC Node Template Editor. ·        Specify: 1) Windows Azure subscription connection info (unique service name) for adding a set of worker nodes to the cluster + 2)worker node availability policy – rules for deploying / removing worker role instances in Windows Azure o   HPC Cluster Manager > Configuration > Navigation Pane > Node Templates > Actions pane > New à Create Node Template Wizard or Edit à Node Template Editor o   Choose Node Template Type page - Windows Azure worker node template o   Specify Template Name page – template name & description o   Provide Connection Information page – Azure Subscription ID (text) & Subscription certificate (browse) o   Provide Service Information page - Azure service name + blob storage account name (optionally click Retrieve Connection Information to get list of available from azure – possible LRT). 
o   Configure Azure Availability Policy page - how Windows Azure worker nodes start / stop (online / offline the worker role instance -  add / remove) – manual / automatic o   for automatic - In the Configure Windows Azure Worker Availability Policy dialog -select days and hours for worker nodes to start / stop. ·        To validate the Windows Azure connection information, on the template's Connection Information tab > Validate connection information. ·        You can upload a file package to the storage account that is specified in the template - eg upload application or service files that will run on the worker nodes. see hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514). Add Azure Worker Nodes to the HPC Cluster ·        Use the Add Node Wizard – specify: 1) the worker node template, 2) The number of worker nodes   (within the quota of role instances in the azure subscription), and 3)           The VM size of the worker nodes : ExtraSmall, Small, Medium, Large, or ExtraLarge.  ·        to add worker nodes of different sizes, must run the Add Node Wizard separately for each size. ·        All worker nodes that are added to the cluster by using a specific worker node template define a set of worker nodes that will be deployed and managed together in Windows Azure when you start the nodes. This includes worker nodes that you add later by using the worker node template and, if you choose, worker nodes of different sizes. You cannot start, stop, or delete individual worker nodes. ·        To add Windows Azure worker nodes o   In HPC Cluster Manager: Node Management > Actions pane > Add Node à Add Node Wizard o   Select Deployment Method page - Add Azure Worker nodes o   Specify New Nodes page - select a worker node template, specify the number and size of the worker nodes ·        After you add worker nodes to the cluster, they are in the Not-Deployed state, and they have a health state of Unapproved. Before you can use the worker nodes to run jobs, you must start them and then bring them online. ·        Worker nodes are numbered consecutively in a naming series that begins with the root name AzureCN – this is non-configurable. Deploying Windows Azure Worker Nodes ·        To deploy the role instances in Windows Azure - start the worker nodes added to the HPC cluster and bring the nodes online so that they are available to run cluster jobs. This can be configured in the HPC Azure Worker Node Template – Azure Availability Policy -  to be automatic or manual. ·        The Start, Stop, and Delete actions take place on the set of worker nodes that are configured by a specific worker node template. You cannot perform one of these actions on a single worker node in a set. You also cannot perform a single action on two sets of worker nodes (specified by two different worker node templates). ·        ·          Starting a set of worker nodes deploys a set of worker role instances in Windows Azure, which can take some time to complete, depending on the number of worker nodes and the performance of Windows Azure. ·        To start worker nodes manually and bring them online o   In HPC Node Management > Navigation Pane > Nodes > List / Heat Map view - select one or more worker nodes. o   Actions pane > Start – in the Start Azure Worker Nodes dialog, select a node template. o   the state of the worker nodes changes from Not Deployed to track the provisioning progress – worker node Details Pane > Provisioning Log tab. 
o   If there were errors during the provisioning of one or more worker nodes, the state of those nodes is set to Unknown and the node health is set to Unapproved. To determine the reason for the failure, review the provisioning logs for the nodes. o   After a worker node starts successfully, the node state changes to Offline. To bring the nodes online, select the nodes that are in the Offline state > Bring Online. ·        Troubleshooting o   check node template. o   use telnet to test connectivity: telnet <ServiceName>.cloudapp.net 7999 o   check node status - Deployment status information appears in the service account information in the Windows Azure Portal - HPC queries this -  see  node status information for any failed nodes in HPC Node Management. ·        When role instances are deployed, file packages that were previously uploaded to the storage account using the hpcpack command are automatically installed. You can also upload file packages to storage after the worker nodes are started, and then manually install them on the worker nodes. see hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514). ·        to remove a set of role instances in Windows Azure - stop the nodes by using HPC Cluster Manager (apply the Stop action). This deletes the role instances from the service and changes the state of the worker nodes in the HPC cluster to Not Deployed. ·        Each time that you start a set of worker nodes, two proxy role instances (size Small) are configured in Windows Azure to facilitate communication between HPC Cluster Manager and the worker nodes. The proxy role instances are not listed in HPC Cluster Manager after the worker nodes are added. However, the instances appear in the Windows Azure Portal. The proxy role instances incur charges in Windows Azure along with the worker node instances, and they count toward the quota of role instances in the subscription.
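
    The subscription (API) certificate described above can also be produced with plain OpenSSL: generate a self-signed certificate with a 2048-bit key, upload the DER-encoded .cer to the Management Certificates page, and import the password-protected .pfx (certificate plus private key) into the head node's Trusted Root store as outlined earlier. A sketch; the file names and subject are arbitrary:

        # Self-signed certificate, 2048-bit key, valid one year (key left unencrypted on disk)
        openssl req -x509 -newkey rsa:2048 -nodes -keyout hpcazure.key -out hpcazure.pem \
            -days 365 -subj "/CN=HPC Azure Management"
        # DER-encoded .cer for upload to the Windows Azure portal
        openssl x509 -in hpcazure.pem -outform DER -out hpcazure.cer
        # Password-protected .pfx for the Certificates MMC import on the head node
        openssl pkcs12 -export -in hpcazure.pem -inkey hpcazure.key -out hpcazure.pfx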

    Read the article

  • CodePlex Daily Summary for Sunday, October 21, 2012

    CodePlex Daily Summary for Sunday, October 21, 2012Popular ReleasesBlogEngine.NET: BlogEngine.NET 2.7 RC: Cheap ASP.NET Hosting - $4.95/Month - Click Here!! Click Here for More Info Cheap ASP.NET Hosting - $4.95/Month - Click Here! dot This is a Release Candidate version for BlogEngine.NET 2.7. The most current, stable version of BlogEngine.NET is version 2.6. Find out more about the BlogEngine.NET 2.7 RC here. To get started, be sure to check out our installation documentation. If you are upgrading from a previous version, please take a look at the Upgrading to BlogEngine.NET 2.7 instructions...Pulse: Pulse 0.6.3.0: Fixed a number of bugs that showed up since my update yesterday. Fixes included are for: - Weird issue where the initial "Nature" wallbase.cc search would duplicate itself - After changing a providers settings it wouldn't take affect until you restarted Pulse (removing or adding a provider entirely did take effect though) - Another small issue with the regex for the wallbase.cc wallpapers that I tweaked yesterday, seems good now though.Liberty: v3.4.0.0 Release 20th October 2012: Change Log -Added -Halo 4 support (invincibility, ammo editing) -Reach A warning dialog now shows up when you first attempt to swap a weapon -Fixed -A few minor bugsMCEBuddy 2.x: MCEBuddy 2.3.3: 1. MCEBuddy now supports PIPE (2.2.15 style) and the newer remote TCP communication. This is to solve problems with faulty Ceton network drivers and some issues with older system related to load. When using LOCALHOST, MCEBuddy uses PIPE communication otherwise it uses TCP based communication. 2. UPnP is now disabled by Default since it interferes with some TV Tuner cards (CETON) that represent themselves as Network devices (bad drivers). Also as a security measure to avoid external connection...Orchard Project: Orchard 1.6 RC: RELEASE NOTES This is the Release Candidate version of Orchard 1.6. You should use this version to prepare your current developments to the upcoming final release, and report problems. Please read our release notes for Orchard 1.6 RC: http://docs.orchardproject.net/Documentation/Orchard-1-6-Release-Notes Please do not post questions as reviews. Questions should be posted in the Discussions tab, where they will usually get promptly responded to. If you post a question as a review, you wil...Rawr: Rawr 5.0.1: This is the Downloadable WPF version of Rawr!For web-based version see http://elitistjerks.com/rawr.php You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr Addon (NOT UPDATED YET FOR MOP)We now have a Rawr Official Addon for in-game exporting and importing of character data hosted on Curse. The Addon does not perform calculations like Rawr, it simply shows your exported Rawr data in wow tooltips and lets you export your character to Rawr (including ba...Yahoo! UI Library: YUI Compressor for .Net: Version 2.1.1.0 - Sartha (BugFix): - Revered back the embedding of the 2x assemblies.Visual Studio Team Foundation Server Branching and Merging Guide: v2.1 - Visual Studio 2012: Welcome to the Branching and Merging Guide What is new? The Version Control specific discussions have been moved from the Branching and Merging Guide to the new Advanced Version Control Guide. The Branching and Merging Guide and the Advanced Version Control Guide have been ported to the new document style. See http://blogs.msdn.com/b/willy-peter_schaub/archive/2012/10/17/alm-rangers-raising-the-quality-bar-for-documentation-part-2.aspx for more information. 
Quality-Bar Details Documentatio...D3 Loot Tracker: 1.5.5: Compatible with 1.05.Write Once, Play Everywhere: MonoGame 3.0 (BETA): This is a beta release of the up coming MonoGame 3.0. It contains an Installer which will install a binary release of MonoGame on windows boxes with the following platforms. Windows, Linux, Android and Windows 8. If you need to build for iOS or Mac you will need to get the source code at this time as the installers for those platforms are not available yet. The installer will also install a bunch of Project templates for Visual Studio 2010 , 2012 and MonoDevleop. For those of you wish...Windawesome: Windawesome v1.4.1 x64: Fixed switching of applications across monitors Changed window flashing API (fix your config files) Added NetworkMonitorWidget (thanks to weiwen) Any issues/recommendations/requests for future versions? This is the 64-bit version of the release. Be sure to use that if you are on a 64-bit Windows. Works with "Required DLLs v3".Restful Objects for .NET: Restful Objects Server 1.0.1: Version 1.0.1 is a bug fix release - fixing bug #55 - a failure to conform to the Restful Objects spec (v.1.0.0) for the Parameters property on an Action Representation. Please note that the easiest way to use Restful Objects for .NET is as NuGet Packages: search the NuGet Public Gallery for 'restfulobjects'. It is only necessary to download the source (from here) if you wish to build and/or modify the framework yourself.Extensions.js: Extensions.js 0.8.3.6 (Release): Extensions.js provides type extensions to facilitate working with javascript objects in a style familiar to C# programmers.PdfReport: PdfReport 1.2: - Added navigation/nested properties support to StronglyTypedList DataSource. - Moved watermark location to the top layer. - Fixed grouping issue in multi column reports. - Fixed a typo, Pervious to Previous! - Added more than 25 samples. you can download them from the "source code" tab: http://pdfreport.codeplex.com/SourceControl/BrowseLatest - Added NuGet Package: http://nuget.org/packages/PdfReport/Merge PDF: MergePDF 1.0 Released: MergePDF 1.0 Released40FINGERS DotNetNuke StyleHelper: 40FINGERS StyleHelper Skin Object 02.06.04: Version 02.06.04:Bug Fix SuperUser Detection Passing IfRole="SuperUsers" did not detect Host users This has been corrected now and the code has been rewritten. 
New Attribute ContentFalse This is the content that gets injected when the conditions Version 02.06.03:Changed IfQs behavior: IfQs also to test if a query String Parameter exists You can now pass a QS paramter without value Where IfQS="ProductId:122" would test for a QS parameter ProductId with value 122 IfQS="ProductId" allows you t...Display attachments (list view) SP 2010: Display attachments (in list view) 1.0.0: Version 1.0.0: Display attachments for list item in list view Async loading attachments using library jQuery 1.8.2 Use sharepoint webservice (/vtibin/Lists.asmx) Simple in use Simple installation Localized: English RussianCODE Framework: 4.0.21017.0: See change log in the Documentation section for details.Magelia WebStore Open-source Ecommerce software: Magelia WebStore 2.1: Add support for .net 4.0 to Magelia.Webstore.Client and StarterSite version 2.1.254.3 Scheduler Import & Export feature (for Professional and Entreprise Editions) UTC datetime and timezone support .net 4.5 and Visual Studio 2012 migration client magelia global refactoring release of a nugget package to help developers speed up development http://nuget.org/packages/Magelia.Webstore.Client optimization of the data update mechanism (a.k.a. "burst") Performance improvment of the d...VFPX: FoxcodePlus: FoxcodePlus - Visual Studio like extensions to Visual FoxPro IntelliSense.New Projectsa new super fast css3 selector engine: kquery - A Super Fast And Compatible Css3 Selector Engine.AcfunWP: Acfun for Windows Phone??????MIT??????????,???????????Windows Phone?????????????????????。AdRotator v2: A highly customizable ad rotator component for Windows Phone and Windows 8 platforms, to be used with Silverlight, XNA and Monogame.BackUpCostaRicaProject: SumaryClickOnceTest: projekatCloudClipboardSync: Ha egy felhasználó eszközei közötti kommunikációról van szó, akkor a Dropbox és hasonló fájlszinkronizációs szolgáltatások felhasználhatók, mint korlátozott átvCodeplexTest: Enter two numbers to get the sum of them.cosuagwusumofnumbers: cosuagwu's sum of numberDaf Yomi WP7 App: Daf Yomi is a Windows Phone 7.5 application that let you listen to current Daf Yomi content from www.daf-yomi.com.Doctor Reg: Doctor Regfelixsumofnumbers: task1: getting two numbers from a user and calculating the sumFoxOS: La Volpe nel tuo osGanagro Lite: Windows forms application for handling grass-fed bull raising operations. Uses .net 3.5 and sql server (2005 or later). Written in c#, localized in spanish.GSISW8: ??a Windows Store efa?µ??? ? ?p??a pa???e? ?as??? St???e?a ??a ?? F?s??? ???s?pa ?a? F?s??? ???s?pa ?p?t?de?µat?e?, µ?s? t?? ???s?? t?? a????t?? Web Service t??JavaScript Calculator: ajogjoohon: Ua ua auaKRATOS: Kratos, the personification of power.Logical Disk Indicator: Logical Disk Indicator is a tool to monitor logical disk activity in notification area. Visual Basic.NET and .Net 2.0Media Organiser: This project aims to provide a tool that allows you not only to overview your media collection but also reorganize it following specific rules you can definePolymorphGame: A University project created in XNA integrating farseer physics engine. Contains some bugs and the code is not of the cleanest. Comments and critics welcome!qp: ????????? ??? ??????? ???? ?????? ????????? ? 
???????????RAIP (Resonance Assignment by Integer Linear Programming): In progress...SanguoshaCardsCounter: SanguoshaCardsCounter??????????????????????????????。 ?????Microsoft Visual Studio 2012????C#????.NET Framework 4.5??。SimpleCalculatorProject: A simple calculator that adds two integers and displays the resultSJKP.PdfConversion: SharePoint 2010 Service Application framework, containing the infrastructure for easy OCR processing of PDF files in lists. A OCR component is not included.SoftwareTestingConcepts: Website gives information about Software Testing Concepts.SpeakToMe: SpeakToMe is a natural language processor that works by tokenizing the input based on known concepts and then matches the token structure against a set of rulesTododoo: This is my small hobby project - the simpliest todo-list possible.TokenUtil: TokenUtil is a command line program for requesting a token from a Security Token Service.VS2012 MSHA file builder: visual studio 2012 help view msha file creation

    Read the article

  • CodePlex Daily Summary for Sunday, June 09, 2013

    CodePlex Daily Summary for Sunday, June 09, 2013Popular ReleasesZXMAK2: Version 2.7.5.5: - several fixes for joystick scanVG-Ripper & PG-Ripper: PG-Ripper 1.4.13: changes NEW: Added Support for "ImageJumbo.com" links FIXED: Ripping of Threads with multiple pagesCKEditor™ Provider for DotNetNuke®: CKEditor Provider 2.00.05: Whats New Updated to CKEditor 4.1.1 Added Auto Save Function (autosave plugin) {Delay can be defined in the Config - Default is 25} New Setting to set the Default Link Type (Editor Config Tab) Added CodeMirror Plugin Settings to the Editor Config Tab Added WordCount Plugin Settings to the Editor Config Tab Added Maximum Upload File Size Info to the Upload Dialog Added Check for Maximum Upload Size on Quick Upload and File Browser Upload changes File-Browser: Fixed an Issue with S...Property Framework: Property Framework (binaries) Latest: Latest stable 6/8/2013xFunc: xFunc (2.2.0.0): Added: user functions;PHP Vulnerability Hunter: PHP Vulnerability Hunter 1.4.0.20 Alpha: PHP Vulnerability Hunter 1.4.0.20 AlphaXomega Framework: Xomega.Framework 1.4: Adding support for Visual Studio 2012 and .Net framework 4.5. Minor bug fixes and enhancements.sb0t v.5: sb0t 5.14: Stability fix in script engine. Avatar.exists property fixed in scripting. cb0t custom font protocol re-added and updated to support new Ares.ASP.NET MVC Forum: MVCForum v1.3.5: This is a bug release version, with a couple of small usability features and UI changes. All the small amount of bugs reported in v1.3 have been fixed, no upgrade needed just overwrite the files and everything should just work.Json.NET: Json.NET 5.0 Release 6: New feature - Added serialized/deserialized JSON to verbose tracing New feature - Added support for using type name handling with ISerializable content Fix - Fixed not using default serializer settings with primitive values and JToken.ToObject Fix - Fixed error writing BigIntegers with JsonWriter.WriteToken Fix - Fixed serializing and deserializing flag enums with EnumMember attribute Fix - Fixed error deserializing interfaces with a valid type converter Fix - Fixed error deser...Christoc's DotNetNuke Module Development Template: DotNetNuke 7 Project Templates V2.3 for VS2012: V2.3 - Release Date 6/5/2013 Items addressed in this 2.3 release Fixed bad namespace for BusinessController in one of the C# templates. Updated documentation in all templates. Setting up your DotNetNuke Module Development Environment Installing Christoc's DotNetNuke Module Development Templates Customizing the latest DotNetNuke Module Development Project TemplatesPulse: Pulse 0.6.7.0: A number of small bug fixes to stabilize the previous Beta. Sorry about the never ending "New Version" bug!QlikView Extension - Animated Scatter Chart: Animated Scatter Chart - v1.0: Version 1.0 including Source Code qar File Example QlikView application Tested With: Browser Firefox 20 (x64) Google Chrome 27 (x64) Internet Explorer 9 QlikView QlikView Desktop 11 - SR2 (x64) QlikView Desktop 11.2 - SR1 (x64) QlikView Ajax Client 11.2 - SR2 (based on x64)BarbaTunnel: BarbaTunnel 7.2: Warning: HTTP Tunnel is not compatible with version 6.x and prior, HTTP packet format has been changed. 
Check Version History for more information about this release.SuperWebSocket, a .NET WebSocket Server: SuperWebSocket 0.8: This release includes these changes below: Upgrade SuperSocket to 1.5.3 which is much more stable Added handshake request validating api (WebSocketServer.ValidateHandshake(TWebSocketSession session, string origin)) Fixed a bug that the m_Filters in the SubCommandBase can be null if the command's method LoadSubCommandFilters(IEnumerable<SubCommandFilterAttribute> globalFilters) is not invoked Fixed the compatibility issue on Origin getting in the different version protocols Marked ISub...BlackJumboDog: Ver5.9.0: 2013.06.04 Ver5.9.0 (1) ?????????????????????????????????($Remote.ini Tmp.ini) (2) ThreadBaseTest?? (3) ????POP3??????SMTP???????????????? (4) Web???????、?????????URL??????????????? (5) Ftp???????、LIST?????????????? (6) ?????????????????????Media Companion: Media Companion MC3.569b: New* Movies - Autoscrape/Batch Rescrape extra fanart and or extra thumbs. * Movies - Alternative editor can add manually actors. * TV - Batch Rescraper, AutoScrape extrafanart, if option enabled. Fixed* Movies - Slow performance switching to movie tab by adding option 'Disable "Not Matching Rename Pattern"' to Movie Preferences - General. * Movies - Fixed only actors with images were scraped and added to nfo * Movies - Fixed filter reset if selected tab was above Home Movies. * Updated Medi...Nearforums - ASP.NET MVC forum engine: Nearforums v9.0: Version 9.0 of Nearforums with great new features for users and developers: SQL Azure support Admin UI for Forum Categories Avoid html validation for certain roles Improve profile picture moderation and support Warn, suspend, and ban users Web administration of site settings Extensions support Visit the Roadmap for more details. Webdeploy package sha1 checksum: 9.0.0.0: e687ee0438cd2b1df1d3e95ecb9d66e7c538293b Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.93: Added -esc:BOOL switch (CodeSettings.AlwaysEscapeNonAscii property) to always force non-ASCII character (ch > 0x7f) to be escaped as the JavaScript \uXXXX sequence. This switch should be used if creating a Symbol Map and outputting the result to the a text encoding other than UTF-8 or UTF-16 (ASCII, for instance). Fixed a bug where a complex comma operation is the operand of a return statement, and it was looking at the wrong variable for possible optimization of = to just .Document.Editor: 2013.22: What's new for Document.Editor 2013.22: Improved Bullet List support Improved Number List support Minor Bug Fix's, improvements and speed upsNew ProjectsAcer 1420p Leaky Handle Fix: Fixes leaking handles on the Acer 1420p laptop given out at PDC09.Akismet Spam Filter for Community Server 2008.5: Akismet Spam Filter for Community Server 2008.5 Atom Timer: Atom Timer is a thread based time that allows schedules to be created using events.BRICK CMS: These Days,I am tired to listen that: .NET is going down and JAVA/Ruby/Python will replace it. yes,they have been growing up while .NET's going down. do or die?DataTestFramework: ???????&ORM??????Date/Time Interval: The Date Time Interval allows for different types of interval to be created. The class will enumerate the defined interval support LINQ statements. More informaDimas.Net: .net infrastructure to create a web/service server from scratch. 
it includes n-tier , log , policy injection , mapper , MVC best prictice and etc.Gannet: Gannet is an operating system for us (the target developers) to learn about how an Operating System is put together and what components are needed.Image Resize For Android: Android????????????LightBlog: LightBlog?????Node.js,Express??,Mongodb???markdown??????????Memory: Live artistic interaction using KinectNestedHtmlWriter: This is a helper class library for writing simple HTML document, by using statement in C#.Operation Sneak Peek: Windows Phone game that includes stealth+logic gameplay. Player has to look for hidden letters to discover a secret word and use it to defuse a bomb.Orchard DarkStripes Theme: Orchard theme based on Octopress DarkStripesPath copy from context menu: ????????????????????????????Phantomas: mouhouhahahahaSE1: NO SUMMARY ! SiteLinks DNN Module: The SiteLinks DNN module is a module for displaying a list of existing links on your DNN website. This module works in similarly to the DNN "Links" skin object.sql to object maping: SqlString CodeMapTCP/IP Communication Framework: TCP/IP Communication Framework (TCP/IP CF) is a library that wraps the .NET Socket class and defines several classes for developing communication applications..UTorrentClient Api: UTorrentClient Api is an extensible set of classes that use WebUI to manipulate µTorrent remotely.Visual Studio Spell Checker: A Visual Studio editor extension that checks the spelling of comments, strings, and plain text as you type. Supports configuration and various languages.zjsru_xyw: this is a test projectZTrans: ztrans is language for embedded software development???: test?????????: ??????????? ????:VS2012+SQL2012 ????:ASP.NET(.NET 4.0) ????:MVC3+EF5 ????: ?????,??,?? ???????,??,?? ????????,??,?? ????DIV+CSS?? Jquery??1.6.4 ??Ajax??????

    Read the article

  • Building Enterprise Smartphone App &ndash; Part 4: Application Development Considerations

    - by Tim Murphy
    This is the final part in a series of posts based on a talk I gave recently at the Chicago Information Technology Architects Group.  Feel free to leave feedback. Application Development Considerations Now we get to the actual building of your solutions.  What are the skills and resources that will be needed in order to develop a smartphone application in the enterprise? Language Knowledge One of the first things you need to consider when you are deciding which platform language do you either have the most in house skill base or can you easily acquire.  If you already have developers who know Java or C# you may want to use either Android or Windows Phone.  You should also take into consideration the market availability of developers.  If your key developer leaves how easy is it to find a knowledgeable replacement? A second consideration when it comes to programming languages is the qualities exposed by the languages of a particular platform.  How well does that development language and its associated frameworks support things like security and access to the features of the smartphone hardware?  This will play into your overall cost of ownership if you have to create this infrastructure on your own. Manage Limited Resources Everything is limited on a smartphone: battery, memory, processing power, network bandwidth.  When developing your applications you will have to keep your footprint as small as possible in every way.  This means not running unnecessary processes in the background that will drain the battery or pulling more data over the airwaves than you have to.  You also want to keep your on device in as compact a format as possible. Mobile Design Patterns There are a number of design patterns that have either come to life because of smartphone development or have been adapted for this use.  The main pattern in the Windows Phone environment is the MVVM (Model-View-View-Model).  This is great for overall application structure and separation of concerns.  The fun part is trying to keep that separation as pure as possible.  Many of the other patterns may or may not have strict definitions, but some that you need to be concerned with are push notification, asynchronous communication and offline data storage. Real estate is limited on smartphones and even tablets. You are also limited in the type of controls that can be represented in the UI. This means rethinking how you modularize your application. Typing is also much harder to do so you want to reduce this as much as possible.  This leads to UI patterns.  While not what we would traditionally think of as design patterns the guidance each platform has for UI design is critical to the success of your application.  If user find the application difficult navigate they will not use it. Development Process Because of the differences in development tools required, test devices and certification and deployment processes your teams will need to learn new way of working together.  This will include the need to integrate service contracts of back-end systems with mobile applications.  You will also want to make sure that you present consistency across different access points to corporate data.  Your web site may have more functionality than your smartphone application, but it should have a consistent core set of functionality.  This all requires greater communication between sub-teams of your developers. 
Testing Process Testing of smartphone apps has a lot more to do with what happens when you lose connectivity or if the user navigates away from your application. There are a lot more opportunities for the user or the device to perform disruptive acts.  This should be your main testing concentration aside from the main business requirements.  You will need to do things like setting the phone to airplane mode and seeing what the application does in order to weed out any gaps in your handling communication interruptions. Need For Outside Experts Since this is a development area that is new to most companies the need for experts is a lot greater. Whether these are consultants, vendor representatives or just development community forums you will need to establish expert contacts. Nothing is more dangerous for your project timelines than a lack of knowledge.  Make sure you know who to call to avoid lengthy delays in your project because of knowledge gaps. Security Security has to be a major concern for enterprise applications. You aren't dealing with just someone's game standings. You are dealing with a companies intellectual property and competitive advantage. As such you need to start by limiting access to the application itself.  Once the user is in the app you need to ensure that the data is secure at all times.  This includes both local storage and across the wire.  This means if a platform doesn’t natively support encryption for these functions you will need to find alternatives to secure your data.  You also need to keep secret (encryption) keys obfuscated or locked away outside of the application. People can disassemble the code otherwise and break your encryption. Offline Capabilities As we discussed earlier one your biggest concerns is not having connectivity.  Because of this a good portion of your code may be dedicated to handling loss of connection and reconnection situations.  What do you do if you lose the network?  Back up all your transactions and store of any supporting data so that operations can continue off line. In order to support this you will need to determine the available flat file or local data base capabilities of the platform.  Any failed transactions will need to support a retry mechanism whether it is automatic or user initiated.  This also includes your services since they will need to be able to roll back partially completed transactions.  What ever you do, don’t ignore this area when you are designing your system. Deployment Each platform has different deployment capabilities. Some are more suited to enterprise situations than others. Apple's approach is probably the most mature at the moment. Prior to the current generation of smartphone platforms it would have been Windows CE. Windows Phone 7 has the limitation that the app has to be distributed through the same network as public facing applications. You mark them as private which means that they are only accessible by a direct URL. Unfortunately this does not make them undiscoverable (although it is very difficult). This will change with Windows Phone 8 where companies will be able to certify their own applications and distribute them.  Given this Windows Phone applications need to be more diligent with application access in order to keep them restricted to the company's employees. My understanding of the Android deployment schemes is that it is much less standardized then either iOS or Windows Phone. 
Someone would have to confirm or deny that for me, though, since I have not yet put the time into researching that platform further. Given my limited exposure to the iOS and Android platforms I have not been able to confirm this, but there are varying degrees of user involvement to install and keep applications updated. At one extreme the user just goes to a website to do the install, and at the other they may need to download files and perform steps to install them. Future Bluetooth Today we use Bluetooth for keyboards, mice and headsets. In the future it could be used to interrogate car computers or manufacturing systems or possibly retail machines by service techs. This would open smartphones to greater use as almost a Star Trek tricorder. You would get all your data as well as being able to use the phone as a universal remote for just about any device or machine. Better corporation-controlled deployment At least in the Windows Phone world, the upcoming release of Windows Phone 8 will include a private certification and deployment option that is currently not available with Windows Phone 7 (Mango). We currently have to run the apps through the Marketplace certification process and use a targeted distribution method. Platform independent approaches HTML5 and JavaScript with web services have become a popular topic lately, not only for creating flexible web sites but also for creating cross-platform mobile applications. I'm not yet convinced that this lowest-common-denominator approach is viable in most cases, but it does have its place and seems to be growing. Be sure to keep an eye on it. Summary From my perspective, enterprise smartphone applications can offer a great competitive advantage to many companies. They are not cheap to build and should be approached cautiously. Understand the factors I have outlined in this series, do your due diligence and see if there is a portion of your business that can benefit from the mobile experience. del.icio.us Tags: Architecture,Smartphones,Windows Phone,iOS,Android
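To make the offline queue and retry mechanism from the Offline Capabilities section concrete, here is a minimal, deliberately platform-neutral sketch in C. Every name in it is hypothetical, and a real application would use the platform's local database and its background/networking APIs rather than a flat file and a stub send function; treat it as an illustration of the idea rather than a recommended implementation.

    #include <stdio.h>

    #define QUEUE_FILE "pending_transactions.dat"   /* hypothetical on-device store */

    struct txn {
        int  id;
        char payload[128];
    };

    /* Stands in for the real network call; returns 0 on success, -1 when offline. */
    static int send_to_server(const struct txn *t)
    {
        (void)t;
        return -1;                      /* pretend we are currently offline */
    }

    /* Persist a transaction so nothing is lost while the device has no network. */
    static int queue_append(const struct txn *t)
    {
        FILE *f = fopen(QUEUE_FILE, "ab");
        if (!f) return -1;
        size_t written = fwrite(t, sizeof *t, 1, f);
        fclose(f);
        return written == 1 ? 0 : -1;
    }

    /* On reconnect, replay queued transactions in order; stop at the first failure
       so the rest can be retried later, automatically or user-initiated. */
    static void queue_replay(void)
    {
        FILE *f = fopen(QUEUE_FILE, "rb");
        struct txn t;
        if (!f) return;
        while (fread(&t, sizeof t, 1, f) == 1) {
            if (send_to_server(&t) != 0) {
                printf("txn %d still pending; will retry later\n", t.id);
                break;
            }
            printf("txn %d replayed\n", t.id);
            /* A real implementation would also remove the replayed record here. */
        }
        fclose(f);
    }

    int main(void)
    {
        struct txn t = { 1, "update customer record" };
        if (send_to_server(&t) != 0)
            queue_append(&t);           /* offline: persist and carry on */
        queue_replay();                 /* later, when connectivity returns */
        return 0;
    }

The same shape works whatever the storage layer is; the important properties are that the queue is durable and that a failed replay leaves the remaining records untouched for the next retry.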

    Read the article

  • How do you read from a file into an array of struct?

    - by Thomas.Winsnes
    I'm currently working on an assignment and this have had me stuck for hours. Can someone please help me point out why this isn't working for me? struct book { char title[25]; char author[50]; char subject[20]; int callNumber; char publisher[250]; char publishDate[11]; char location[20]; char status[11]; char type[12]; int circulationPeriod; int costOfBook; }; void PrintBookList(struct book **bookList) { int i; for(i = 0; i < sizeof(bookList); i++) { struct book newBook = *bookList[i]; printf("%s;%s;%s;%d;%s;%s;%s;%s;%s;%d;%d\n",newBook.title, newBook.author, newBook.subject, newBook.callNumber,newBook.publisher, newBook.publishDate, newBook.location, newBook.status, newBook.type,newBook.circulationPeriod, newBook.costOfBook); } } void GetBookList(struct book** bookList) { FILE* file = fopen("book.txt", "r"); struct book newBook[1024]; int i = 0; while(fscanf(file, "%s;%s;%s;%d;%s;%s;%s;%s;%s;%d;%d", &newBook[i].title, &newBook[i].author, &newBook[i].subject, &newBook[i].callNumber,&newBook[i].publisher, &newBook[i].publishDate, &newBook[i].location, &newBook[i].status, &newBook[i].type,&newBook[i].circulationPeriod, &newBook[i].costOfBook) != EOF) { bookList[i] = &newBook[i]; i++; } /*while(fscanf(file, "%s;%s;%s;%d;%s;%s;%s;%s;%s;%d;%d", &bookList[i].title, &bookList[i].author, &bookList[i].subject, &bookList[i].callNumber, &bookList[i].publisher, &bookList[i].publishDate, &bookList[i].location, &bookList[i].status, &bookList[i].type, &bookList[i].circulationPeriod, &bookList[i].costOfBook) != EOF) { i++; }*/ PrintBookList(bookList); fclose(file); } int main() { struct book *bookList[1024]; GetBookList(bookList); } I get no errors or warnings on compile it should print the content of the file, just like it is in the file. Like this: OperatingSystems Internals and Design principles;William.S;IT;741012759;Upper Saddle River;2009;QA7676063;Available;circulation;3;11200 Communication skills handbook;Summers.J;Accounting;771239216;Milton;2010;BF637C451;Available;circulation;3;7900 Business marketing management:B2B;Hutt.D;Management;741912319;Mason;2010;HF5415131;Available;circulation;3;1053 Patient education rehabilitation;Dreeben.O;Education;745121511;Sudbury;2010;CF5671A98;Available;reference;0;6895 Tomorrow's technology and you;Beekman.G;Science;764102174;Upper Saddle River;2009;QA76B41;Out;reserved;1;7825 Property & security: selected essay;Cathy.S;Law;750131231;Rozelle;2010;D4A3C56;Available;reference;0;20075 Introducing communication theory;Richard.W;IT;714789013;McGraw-Hill;2010;Q360W47;Available;circulation;3;12150 Maths for computing and information technology;Giannasi.F;Mathematics;729890537;Longman;Scientific;1995;QA769M35G;Available;reference;0;13500 Labor economics;George.J;Economics;715784761;McGraw-Hill;2010;HD4901B67;Available;circulation;3;7585 Human physiology:from cells to systems;Sherwood.L;Physiology;707558936;Cengage Learning;2010;QP345S32;Out;circulation;3;11135 bobs;thomas;IT;701000000;UC;1006;QA7548;Available;Circulation;7;5050 but when I run it, it outputs this: OperatingSystems;;;0;;;;;;0;0 Internals;;;0;;;;;;0;0 and;;;0;;;;;;0;0 Design;;;0;;;;;;0;0 principles;William.S;IT;741012759;Upper;41012759;Upper;;0;;;;;;0;0 Saddle;;;0;;;;;;0;0 River;2009;QA7676063;Available;circulation;3;11200;lable;circulation;3;11200;;0;;;;;;0;0 Communication;;;0;;;;;;0;0 Thanks in advance, you're a life saver
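A likely explanation, offered as a sketch rather than a definitive answer: %s in the scanf family stops at whitespace, never at ';', so the semicolon-separated fields are never matched, and sizeof(bookList) inside PrintBookList is the size of a pointer, not the number of books read. One way to restructure it, assuming the file really is one semicolon-separated record per line as in the sample data (the widened title buffer, the field widths and the helper signatures are illustrative assumptions):

    #include <stdio.h>

    struct book {
        char title[100];               /* 25 in the original is too small for the sample titles */
        char author[50];
        char subject[20];
        int  callNumber;
        char publisher[250];
        char publishDate[11];
        char location[20];
        char status[11];
        char type[12];
        int  circulationPeriod;
        int  costOfBook;
    };

    /* Read at most max records; return how many were actually parsed. */
    static int GetBookList(const char *path, struct book *books, int max)
    {
        FILE *file = fopen(path, "r");
        char line[1024];
        int n = 0;

        if (!file) { perror(path); return 0; }
        while (n < max && fgets(line, sizeof line, file)) {
            /* %[^;] reads up to (but not including) the next ';'; the widths prevent overflows. */
            if (sscanf(line,
                       "%99[^;];%49[^;];%19[^;];%d;%249[^;];%10[^;];%19[^;];%10[^;];%11[^;];%d;%d",
                       books[n].title, books[n].author, books[n].subject,
                       &books[n].callNumber, books[n].publisher, books[n].publishDate,
                       books[n].location, books[n].status, books[n].type,
                       &books[n].circulationPeriod, &books[n].costOfBook) == 11) {
                n++;
            }
        }
        fclose(file);
        return n;
    }

    int main(void)
    {
        struct book books[1024];
        int count = GetBookList("book.txt", books, 1024);
        for (int i = 0; i < count; i++)
            printf("%s;%s;%s;%d;%s;%s;%s;%s;%s;%d;%d\n",
                   books[i].title, books[i].author, books[i].subject,
                   books[i].callNumber, books[i].publisher, books[i].publishDate,
                   books[i].location, books[i].status, books[i].type,
                   books[i].circulationPeriod, books[i].costOfBook);
        return 0;
    }

Passing the count around also removes the dangling-pointer risk of keeping addresses of a local array after the function that owns it returns.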

    Read the article

  • SQLAuthority News – The Best Quotes of “Who Wrote This?” Contest

    - by pinaldave
    I am a frequent reader of Brent Ozar PLF; it is one of my favorite blogs. A recent post announced a “Who Wrote This?” contest to see if readers could tell their three contributors apart based on some writing samples. Here are my favorite lines from the sample paragraphs, from each of the three “mystery authors.” Topic 1: Working with Bad Managers Mystery Author A – “Working with bad managers means working against my own happiness, and I’ve come to learn that there’s no changing bad managers.” I love this line because, as anyone who has had a bad manager knows, often a lot of self-doubt rises up. We all have to remember that sometimes the problem is out of our control. Mystery Author B – “Mentor your manager just like you would mentor a junior DBA.” Having a bad manager can be extremely depressing, and we often feel out of control. But we all need to remember that our work is a two-way street, and that sometimes we can subtly influence those above us. Mystery Author C – “The trick to working for all bad managers is to remember that they aren’t your parent. Take charge of your career.” We all also need to learn not to play the blame game. Would you rather stay in a place where you are unhappy, or would you rather take charge of your life? I hope most people would pick the latter. Topic 2: Working with Remote Teams Mystery Author A – “Like almost anything else the key is to make sure that everyone on the team has an understanding of how and when communication will occur.” Communication is so important. I cannot overemphasize how much. And this one line captures how I feel and even communicates the idea clearly! Mystery Author B – “The key to remote team success is verifiable trust: feeling confident that invisible team members are doing the right amount of the right thing at the right time.” I think this line not only captures the key aspects of remote work – verifiable work and trust – but there were so many lines that followed that I loved and could not fit here. The whole paragraph is a list for successful remote work. Everyone could benefit from reading it. Mystery Author C – “What seems clear, precise, and specific in one time zone comes across as vague, soupy, and just plain weird in another.” You know what? I just love this description. The author is right – sometimes vague e-mails really do seem soupy and weird! Topic 3: Working with Your Nemesis Mystery Author A – “Every job is temporary, but your reputation stays with you.” Everyone needs to remember this. The workplace is meant to be a professional arena, yet many people have the opinion that work is temporary and disposable. No one wants to work with a co-worker like that. Mystery Author B – “Unhealthy conflict is going to lead to leaving three week old tuna fish sandwiches in someone’s desk drawer.” Sometimes humor really is the best policy! Mystery Author C – “Oh no, it’s that guy.” This might seem like a weird phrase to choose as my favorite from an entire paragraph. But the whole piece was written in the form of a story of co-workers getting drunk and plotting against a nemesis. It was too funny to overlook, but too long to post here. A must read! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • The “Customer” Experience Revolution is Here

    - by Natalia Rachelson
    A guest post by Anthony Lye, SVP, Oracle Development. The Experience Revolution is here, and we are going to explore and celebrate our new customer experience ventures and strategy in an extraordinary way. In true Oracle fashion, we are hosting an exceptional event, bringing together customer experience advocates, visionaries and practitioners to discover and define Oracle’s Customer Experience vision. The Experience Revolution is best described as today’s era of the empowered consumer. For those of us who work with customers on a daily basis, we know that the modern consumer demands fast, accurate, consistent information across all communication channels. And if they don’t like the services they receive, they can easily take to social channels to voice their disapproval. For this reason, organizations today operate in an environment where traditional methods of differentiation are less effective and customer experience has become the primary driver of business value. Here’s some food for thought: according to the 2011 Customer Experience Impact (CEI) Report, a full 89 percent of consumers will switch brands for a better customer experience. In short, in today’s era of the empowered consumer, delivering excellent customer experiences is what will, and is, defining the next great brands. At The Experience Revolution, Oracle President Mark Hurd will detail the vision of where customer experience is going and how Oracle will help you get there. He will introduce for the first time Oracle Customer Experience, a cross-stack suite of customer experience products that enable organizations to: engage customers with a consistent, connected and personalized brand experience across all channels and devices; deliver exceptional cross-channel order fulfillment and customer service through web, call centers and social networks; and connect and analyze data from all interactions to better personalize experiences and identify hidden opportunities. The Experience Revolution will also include an interactive gallery of customer experience interactions, featuring videos, touch screens and near field communication technology that will guide each attendee through an individualized event experience. We hope you will join us for an incredible evening on June 25, from 6:00 – 9:00 p.m. at Gotham Hall in New York City. You can register for The Experience Revolution here. And if you haven’t already joined the conversation on Twitter, please do: #OracleCX, #ExperienceRevolution

    Read the article

  • The dislikes of TDD

    - by andrewstopford
    I enjoy debates about TDD, and Brian Harry's blog post is no exception. Brian sounds out what he likes and dislikes about TDD, and it's the dislikes I'll focus on. "The idea of having unit tests that cover virtually every line of code that I’ve written that I have to refactor every time I refactor my code makes me shudder. Doing this way makes me take nearly twice as long as it would otherwise take and I don’t feel like I get sufficient benefits from it." Refactoring your tests to match your refactored code sounds like the tests are suffering. Too many hard dependencies with no SOLID concerns are a sure-fire reason you would do this. Maybe at the start of a TDD cycle you would need to do this as your design evolves and you remove these dependencies, but this should quickly be resolved as you refactor. If you find yourself still doing it, then stop and look back at your design. "Don’t get me wrong, I’m a big fan of unit tests. I just prefer to write them after the code has stopped shaking a bit. In fact most of my early testing is “manual”. Either I write a small UI on top of my service that allows me to plug in values and try it or write some quick API tests that I throw away as soon as I have validated them." The problem with this is that a UI can make assumptions about your code that you then just unit test around, and very quickly the design becomes bad and technical debt sweeps in. If you want to black-box test your code with a UI, then do so after your TDD cycles, not before. "This is probably my biggest issue with a literal TDD interpretation. TDD says you never write a line of code without a failing test to show you need it. I find it leads developers down a dangerous path. Without any help from a methodology, I have met way too many developers in my life that “back into a solution”. By this, I mean they write something, it mostly works and they discover a new requirement so they tack it on, and another and another and when they are done, they’ve got a monstrosity of special cases each designed to handle one specific scenario. There’s way more code than there should be and it’s way too complicated to understand. I believe in finding general solutions to problems from which all the special cases naturally derive rather than building a solution of special cases. In my mind, to do this, you have to start by conceptualizing and coding the framework of the general algorithm. For me, that’s a relatively monolithic exercise." TDD is a development practice, not a methodology; the danger is that the solution becomes a mass of different things that violate DRY. TDD won't solve these problems; only good communication and practices like pairing will help. Above all else, assuming that TDD replaces a methodology is a mistake: combine it with whatever works for your team/business, but only good communication will help. A good naming scheme/structure for folders, files and tests can help you and your team isolate which tests are for what.
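For readers who have not seen the cycle Brian is reacting to, here is a minimal, hypothetical red/green illustration in C; the function name, the VAT rule and the bare assert() harness are all invented for the example, and a real project would use a proper test framework.

    #include <assert.h>

    /* Step 1 (red): declare the behaviour you need and write its test first;
       with no implementation yet, the build fails and the test is "red". */
    static int add_vat(int net_pence);

    static void test_add_vat_rounds_to_nearest_penny(void)
    {
        assert(add_vat(100) == 120);   /* hypothetical 20% VAT rule */
    }

    /* Step 2 (green): the simplest implementation that makes the test pass
       (shown here so the sketch compiles); refactor afterwards with the test as a safety net. */
    static int add_vat(int net_pence)
    {
        return (net_pence * 120 + 50) / 100;
    }

    int main(void)
    {
        test_add_vat_rounds_to_nearest_penny();
        return 0;
    }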

    Read the article

  • How to make software development decisions based on facts

    - by Laila
    We love to hear stories about the many and varied ways our customers use the tools that we develop, but in our earnest search for stories and feedback, we'd rather forgotten that some of our keenest users are fellow RedGaters, in the same building. It was almost by chance that we discovered how the SQL Source Control team were using SmartAssembly. As it happens, there is a separate account (here on Simple-Talk) of how SmartAssembly was used to support the Early Access program, by providing answers to specific questions about how the SQL Source Control product was used. But what really got us all grinning was how valuable the SQL Source Control team found the reports that SmartAssembly was quickly and painlessly providing. So gather round, my friends, and I'll tell you the Tale Of The Framework Upgrade. <strange mirage effect to denote a flashback; a subtle background string of music starts playing in a minor key> Kevin and his team were undecided. They weren't sure whether they could move their software product from .NET 2 to .NET 3.5, let alone to .NET 4. You see, they were faced with having to guess what version of .NET was already installed on the average user's machine, which I'm sure you'll agree is no easy task. Upgrading their code to .NET 3.5 might put up a barrier to people trying the tool, which was the last thing Kevin wanted: "what if our users have to download X, Y, and Z before being able to open the application?" he asked. That fear of users having to do half an hour of downloads (followed by at least ten minutes of installation, followed by a five-minute restart) meant that Kevin's team couldn't take advantage of WCF (Windows Communication Foundation). This made them sad, because WCF would have allowed them to write their code in a much simpler way, and in hours instead of days (as was the case with .NET 2). Oh sure, they had a gut feeling that this probably wasn't the case, since 3.5 had been out for so many years, but they weren't sure. <background music switches to major key> SmartAssembly Feature Usage Reporting gave Kevin and his team exactly what they needed: hard data on their users' systems, both hardware and software. I was there, I saw it happen, and that's not the sort of thing a woman quickly forgets. I'll always remember his last words (before he went to lunch): "You get lots of free information by just checking a box in SmartAssembly" is what he said. For example, they could see how many CPU cores their customers were using, and found out that they should be making use of parallelism to take advantage of available cores. But crucially (and this is the moral of my tale, dear reader), Kevin saw that 99% of SQL Source Control's users were on .NET 3.5 or above.   So he knew that they could make the switch and that it was safe to do so. With this reassurance, they could use WCF not only to make development easier, but also to give them a really nice way to do inter-process communication between the Source Control and SQL Compare products. To have done that on .NET 2.0 was certainly possible <knowing chuckle>, but Microsoft have made it a lot easier with WCF. <strange mirage effect to denote end of flashback> So you see, with Feature Usage Reporting, they finally got the hard evidence they needed to safely make the switch to .NET 3.5, knowing it would not inconvenience their users. And that, my friends, is just the sort of thing we like to hear.

    Read the article

  • Why is my machine unable to mount my SMB drives ("CIFS VFS: Error connecting to socket. Aborting operation", return code -115)?

    - by downbeat
    I have a machine running Precise (12.04 x64), and I cannot mount my SMB drives (I have 3, we'll call them public, private and download). It used to work (a week or two ago) and I didn't touch fstab! The machine hosting the shares is a commercial NAS, and I'm not seeing anything that would indicate it's an issue with the NAS. I have an older machine which I updated to Precise at the same time (both fresh installed, not dist-upgrade), so should have a very similar configuration. It is not having any problems. I am not having problems on windows machines/partitions either, only one of my Precise machines. The two machines are using identical entries in fstab and identical /etc/samba/smb.conf files. I don't think I've ever changed smb.conf (has never mattered before). My fstab entries all basically look like this: //10.1.1.111/public /media/public cifs credentials=/home/downbeat/.credentials,iocharset=utf8,uid=downbeat,gid=downbeat,file_mode=0644,dir_mode=0755 0 0 Here's the dmesg output on boot: [ 51.162198] CIFS VFS: Error connecting to socket. Aborting operation [ 51.162369] CIFS VFS: cifs_mount failed w/return code = -115 [ 51.194106] CIFS VFS: Error connecting to socket. Aborting operation [ 51.194250] CIFS VFS: cifs_mount failed w/return code = -115 [ 51.198120] CIFS VFS: Error connecting to socket. Aborting operation [ 51.198243] CIFS VFS: cifs_mount failed w/return code = -115 There are no other errors I see in the dmesg output. Originally when I ran 'testparm -s', the output contained these lines ERROR: lock directory /var/run/samba does not exist ERROR: pid directory /var/run/samba does not exist Here's the samba related programs I have installed: $ dpkg --list|grep -i samba ii libpam-winbind 2:3.6.3-2ubuntu2.3 Samba nameservice and authentication integration plugins ii libwbclient0 2:3.6.3-2ubuntu2.3 Samba winbind client library ii nautilus-share 0.7.3-1ubuntu2 Nautilus extension to share folder using Samba ii python-smbc 1.0.13-0ubuntu1 Python bindings for Samba clients (libsmbclient) ii samba-common 2:3.6.3-2ubuntu2.3 common files used by both the Samba server and client ii samba-common-bin 2:3.6.3-2ubuntu2.3 common files used by both the Samba server and client ii winbind 2:3.6.3-2ubuntu2.3 Samba nameservice integration server $ dpkg --list|grep -i smb ii dmidecode 2.11-4 SMBIOS/DMI table decoder ii libsmbclient 2:3.6.3-2ubuntu2.3 shared library for communication with SMB/CIFS servers ii python-smbc 1.0.13-0ubuntu1 Python bindings for Samba clients (libsmbclient) ii smbclient 2:3.6.3-2ubuntu2.3 command-line SMB/CIFS clients for Unix ii smbfs 2:5.1-1ubuntu1 Common Internet File System utilities - compatibility package $ dpkg --list|grep -i cifs ii cifs-utils 2:5.1-1ubuntu1 Common Internet File System utilities ii libsmbclient 2:3.6.3-2ubuntu2.3 shared library for communication with SMB/CIFS servers ii smbclient 2:3.6.3-2ubuntu2.3 command-line SMB/CIFS clients for Unix I originally noticed that my other machine had "libpam-winbind" and "nautilus-share" installed and the machine with the issue did not. Installing those two packages solved my errors with 'testparm -s', but did not fix my issue. Finally, I tried to purge and reinstall these packages smbclient smbfs cifs-utils samba-common samba-common-bin Still no luck. Again, it used to work; now it doesn't. Very similarly configured machine works (but some packages are out of date on the working machine). 
The NAS has only one interface/IP address; nmblookup finds its IP from its hostname (from the machine with the issue), and it responds to a ping. Any help would be great. I've been searching on AskUbuntu, SuperUser, ubuntuforums and plain old search engines for a week now and it's driving me crazy!
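One way to narrow this down, offered as a diagnostic sketch rather than a known fix: check whether the machine can open a plain TCP connection to the NAS's SMB port at all (445 for SMB over TCP, 139 for the NetBIOS session service), independently of the CIFS client. If the connect times out, the problem sits below Samba/CIFS (routing, firewall or the NAS service itself) rather than in fstab or the installed packages. The address below is the one from the fstab entry in the question.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(int argc, char **argv)
    {
        const char *ip = (argc > 1) ? argv[1] : "10.1.1.111";   /* NAS address from fstab */
        int port = (argc > 2) ? atoi(argv[2]) : 445;            /* 445 = SMB over TCP, 139 = NetBIOS */

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_port = htons(port);
        if (inet_pton(AF_INET, ip, &addr.sin_addr) != 1) {
            fprintf(stderr, "bad address: %s\n", ip);
            close(fd);
            return 1;
        }

        if (connect(fd, (struct sockaddr *)&addr, sizeof addr) != 0) {
            perror("connect");          /* a timeout here mirrors the kernel's socket error */
            close(fd);
            return 1;
        }

        printf("TCP connection to %s:%d succeeded; the SMB port is reachable.\n", ip, port);
        close(fd);
        return 0;
    }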

    Read the article

  • Inside Red Gate - The Office

    - by Simon Cooper
    The vast majority of Red Gate is on the first and second floors (the second and third floors in US parlance) of an office building in Cambridge Business Park (here we are!). As you can see, the building is split into three sections: the two wings, and the section between them. As well as being organisationally separate, the four divisions are also split up in the office; each division has its own floor and wing, so everyone in the division is working together in the same area (.NET and DBA on the left, SQL Tools and New Business on the right). The non-divisional parts of the business share wings with the smaller divisions, again keeping each group together. The canteen One of the downsides of divisionalisation is that communication between people in different divisions is greatly reduced. This is where the canteen (aka the SQL Servery) comes in. Occupying most of the central section on the first floor, the canteen provides a free cooked lunch every day, and is where everyone in the company gathers for lunch. The idea is to encourage communication between the divisions; having lunch with people in a different division you wouldn't otherwise talk to helps people keep track of what's going on elsewhere in the company. (I'm still amazed at how the canteen staff provide a wide range of superbly cooked food for over 200 people out of a kitchen in which, if you were to swing a cat, it would get severe head injuries.) There are also table tennis and table football tables that anyone can use, provided you can grab them when they're free! Office layout Cubicles are practically unheard of in the UK, and no one, including the CEOs, has a separate office. The entire office is open-plan, as you can see in this YouTube video from when we first moved in (although all the empty desks are now full!). Neil & Simon, instead of having dedicated offices, move between the different divisions every few months to keep up to date with what's going on around the company; sitting with a division gives you a much better overall impression of how the division's doing than written status reports from the division heads. There's also the usual plethora of meeting rooms scattered around the place; when we first moved in in 2009 we had a competition to name them all. We've got Afoxalypse A & B, Seagulls A & B, Traffic Jam, Thinking Hats, Camelids A & B, Horses, etc. All the meeting rooms have pictures on the walls corresponding to their theme, which adds a nice bit of individuality to otherwise fairly drab meeting rooms. Generally, any meeting room can be booked by anyone at any time, although some groups have priority in certain rooms (Camelids B is used a lot for UX testing, the Interview Room is used for, well, interviews). And, as you can see from the video, each area has various pictures, post-its, notes and signs on the walls to try and stop it being a dull office space. Yes, it's still an office, but it's designed to be as interesting and as individual as possible.

    Read the article

  • career in Mobile sw/Application Development [closed]

    - by pramod
    i m planning to do a course on Wireless & mobile computing.The syllabus are given below.Please check & let me know whether its worth to do.How is the job prospects after that.I m a fresher & from electronic Engg.The modules are- *Wireless and Mobile Computing (WiMC) – Modules* C, C++ Programming and Data Structures 100 Hours C Revision C, C++ programming tools on linux(Vi editor, gdb etc.) OOP concepts Programming constructs Functions Access Specifiers Classes and Objects Overloading Inheritance Polymorphism Templates Data Structures in C++ Arrays, stacks, Queues, Linked Lists( Singly, Doubly, Circular) Trees, Threaded trees, AVL Trees Graphs, Sorting (bubble, Quick, Heap , Merge) System Development Methodology 18 Hours Software life cycle and various life cycle models Project Management Software: A Process Various Phases in s/w Development Risk Analysis and Management Software Quality Assurance Introduction to Coding Standards Software Project Management Testing Strategies and Tactics Project Management and Introduction to Risk Management Java Programming 110 Hours Data Types, Operators and Language Constructs Classes and Objects, Inner Classes and Inheritance Inheritance Interface and Package Exceptions Threads Java.lang Java.util Java.awt Java.io Java.applet Java.swing XML, XSL, DTD Java n/w programming Introduction to servlet Mobile and Wireless Technologies 30 Hours Basics of Wireless Technologies Cellular Communication: Single cell systems, multi-cell systems, frequency reuse, analog cellular systems, digital cellular systems GSM standard: Mobile Station, BTS, BSC, MSC, SMS sever, call processing and protocols CDMA standard: spread spectrum technologies, 2.5G and 3G Systems: HSCSD, GPRS, W-CDMA/UMTS,3GPP and international roaming, Multimedia services CDMA based cellular mobile communication systems Wireless Personal Area Networks: Bluetooth, IEEE 802.11a/b/g standards Mobile Handset Device Interfacing: Data Cables, IrDA, Bluetooth, Touch- Screen Interfacing Wireless Security, Telemetry Java Wireless Programming and Applications Development(J2ME) 100 Hours J2ME Architecture The CLDC and the KVM Tools and Development Process Classification of CLDC Target Devices CLDC Collections API CLDC Streams Model MIDlets MIDlet Lifecycle MIDP Programming MIDP Event Architecture High-Level Event Handling Low-Level Event Handling The CLDC Streams Model The CLDC Networking Package The MIDP Implementation Introduction to WAP, WML Script and XHTML Introduction to Multimedia Messaging Services (MMS) Symbian Programming 60 Hours Symbian OS basics Symbian OS services Symbian OS organization GUI approaches ROM building Debugging Hardware abstraction Base porting Symbian OS reference design porting File systems Overview of Symbian OS Development – DevKits, CustKits and SDKs CodeWarrior Tool Application & UI Development Client Server Framework ECOM STDLIB in Symbian iPhone Programming 80 Hours Introducing iPhone core specifications Understanding iPhone input and output Designing web pages for the iPhone Capturing iPhone events Introducing the webkit CSS transforms transitions and animations Using iUI for web apps Using Canvas for web apps Building web apps with Dashcode Writing Dashcode programs Debugging iPhone web pages SDK programming for web developers An introduction to object-oriented programming Introducing the iPhone OS Using Xcode and Interface builder Programming with the SDK Toolkit OS Concepts & Linux Programming 60 Hours Operating System Concepts What is an OS? 
Processes Scheduling & Synchronization Memory management Virtual Memory and Paging Linux Architecture Programming in Linux Linux Shell Programming Writing Device Drivers Configuring and Building GNU Cross-tool chain Configuring and Compiling Linux Virtual File System Porting Linux on Target Hardware WinCE.NET and Database Technology 80 Hours Execution Process in .NET Environment Language Interoperability Assemblies Need of C# Operators Namespaces & Assemblies Arrays Preprocessors Delegates and Events Boxing and Unboxing Regular Expression Collections Multithreading Programming Memory Management Exceptions Handling Win Forms Working with database ASP .NET Server Controls and client-side scripts ASP .NET Web Server Controls Validation Controls Principles of database management Need of RDBMS etc Client/Server Computing RDBMS Technologies Codd’s Rules Data Models Normalization Techniques ER Diagrams Data Flow Diagrams Database recovery & backup SQL Android Application 80 Hours Introduction of android Why develop for android Android SDK features Creating android activities Fundamental android UI design Intents, adapters, dialogs Android Technique for saving data Data base in Androids Maps, Geocoding, Location based services Toast, using alarms, Instant messaging Using blue tooth Using Telephony Introducing sensor manager Managing network and wi-fi connection Advanced androids development Linux kernel security Implement AIDL Interface. Project 120 Hours

    Read the article

  • Meet Thomas, the Most Innovational person in Oracle Direct EMEA of Q1

    - by Maria Sandu
    Thomas was voted, by his peers, the most Innovational person in Oracle Direct EMEA of Q1, the first quarter of this fiscal! Thomas, a Business Development Consultant at Oracle Direct's Applications Team, taught himself how to use and leverage the power of social engagement consistent with Oracle's Social Media Policy. From these learnings he provided both his own and other applications teams in Dublin with huge amounts of training and has presented his findings to the teams on more than one occasion. It is important to recognise that this isn't just a great idea... it actually works! The results speak for themselves. Thomas is engaging with customers and prospects via their preferred channel of communication and creating a strong personal social brand. We congratulate Thomas for his efforts in raising social media to the next level within the Business Development Group. He put a lot of work into social selling, as one of the first within the BDG, and set the example for a new, innovative approach to selling in 2013. He deserves to be recognized for this. His contribution to social media has been a great inspiration for all Business Development Consultants and Business Relationship Consultants. He knows what he talks about and has great conversion rates out of his social media campaigns. And he doesn't mind sharing his knowledge with everybody. He has made a great effort in searching for new ways of communication and social selling. Thomas has shown great initiative in leveraging social media and networks (Twitter, LinkedIn) to find new business opportunities in a way not used previously. He has shown great out-of-the-box thinking while addressing new companies and prospects, and has shared those experiences and ideas to help his colleagues use the same approach. This included a presentation, informational emails and a generally helpful attitude from him. He also shared the success stories from his innovative approach. Thomas shows initiative with an innovative and fresh character, truly helping people to try something new, with a focus on selling across channels, while working for the CRM team, which is focused on selling social. We think the way Thomas positions social by using social is innovative and inspirational. What better way to tell your clients to do social than by engaging with them on a social platform? Always going the extra mile, we believe that Thomas Brits has been an innovator from the day he walked into Oracle Direct. The way he operates on the work floor, introducing new ideas to find the best possible opportunities, shows he goes the extra mile to come up with new ways to engage with customers more efficiently, for instance via social media. He also organises power hours/days for the team. He is the best!

    Read the article
