Search Results

Search found 24268 results on 971 pages for 'customer service'.


  • Spring NullPointerException - can't access @Autowired annotated service (or DAO) in a no-annotations class

    - by user286806
    I have a problem that I cannot fix. From my @Controller I can easily access my autowired @Service class and work with it, no problem. But when I do the same from a separate class without annotations, it gives me a NullPointerException. My controller (works):

        @Controller
        public class UserController {
            @Autowired
            UserService userService;
            ...

    My separate Java class (not working):

        public final class UsersManagementUtil {
            @Autowired
            UserService userService;
            // or
            @Autowired
            UserDao userDao;

    userService and userDao are always null; I was just trying to see if either of them works. My component scan setting has the root-level package set for scanning, so that should be OK. My servlet context:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:context="http://www.springframework.org/schema/context"
               xmlns:tx="http://www.springframework.org/schema/tx"
               xsi:schemaLocation="
                   http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                   http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd
                   http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-3.0.xsd">

            <!-- the application context definition for the springapp DispatcherServlet -->
            <!-- Enable annotation driven controllers, validation etc... -->
            <context:property-placeholder location="classpath:jdbc.properties" />
            <context:component-scan base-package="x" />
            <tx:annotation-driven transaction-manager="hibernateTransactionManager" />

            <!-- package shortened -->
            <bean id="messageSource" class="o.s.c.s.ReloadableResourceBundleMessageSource">
                <property name="basename" value="/WEB-INF/messages" />
            </bean>

            <bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
                <property name="driverClassName" value="${database.driver}" />
                <property name="url" value="${database.url}" />
                <property name="username" value="${database.user}" />
                <property name="password" value="${database.password}" />
            </bean>

            <!-- package shortened -->
            <bean id="viewResolver" class="o.s.w.s.v.InternalResourceViewResolver">
                <property name="prefix"><value>/</value></property>
                <property name="suffix"><value>.jsp</value></property>
                <property name="order"><value>0</value></property>
            </bean>

            <!-- package shortened -->
            <bean id="sessionFactory" class="o.s.o.h3.a.AnnotationSessionFactoryBean">
                <property name="dataSource" ref="dataSource" />
                <property name="annotatedClasses">
                    <list>
                        <value>orion.core.models.Question</value>
                        <value>orion.core.models.User</value>
                        <value>orion.core.models.Space</value>
                        <value>orion.core.models.UserSkill</value>
                        <value>orion.core.models.Question</value>
                        <value>orion.core.models.Rating</value>
                    </list>
                </property>
                <property name="hibernateProperties">
                    <props>
                        <prop key="hibernate.dialect">${hibernate.dialect}</prop>
                        <prop key="hibernate.show_sql">${hibernate.show_sql}</prop>
                        <prop key="hibernate.hbm2ddl.auto">${hibernate.hbm2ddl.auto}</prop>
                    </props>
                </property>
            </bean>

            <bean id="hibernateTransactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager">
                <property name="sessionFactory" ref="sessionFactory" />
            </bean>

    Any clue?
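
    For reference, the usual cause here is that Spring only injects @Autowired fields into beans it manages; an object created with plain new (or a class never registered as a bean) is invisible to the container, so its fields stay null. A minimal sketch of one common fix, assuming the class's package falls under the configured component-scan root (class and service names taken from the question):

        import org.springframework.beans.factory.annotation.Autowired;
        import org.springframework.stereotype.Component;

        @Component  // now component-scan picks this class up and wires its fields
        public class UsersManagementUtil {

            @Autowired
            private UserService userService;

            public void doSomethingWithUsers() {
                // userService is non-null only on the Spring-created instance
                userService.getClass();
            }
        }

    The wiring only applies to the container-created instance, so the utility must itself be obtained via injection or from the ApplicationContext, never constructed with new. Also note that if class-based proxying (e.g. AOP, @Transactional) is involved, the final modifier on the class may have to go.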


  • Too many connections to 212.192.255.240

    - by Castor
    Recently, my Internet slowed down drastically. I downloaded a tool to see the TCP/IP connections from my Vista computer and found out that a lot of TCP/IP connections are being made to 212.192.255.240 through svchost, apparently trying different ports. I think my computer is infected with some kind of malware, but I am not sure how to get rid of it. I did a little research on this IP but found nothing. Any suggestions are highly appreciated. UPDATE: Below is the HijackThis log file; I can't find anything weird in it. The program is also trying to create connections to 91.205.127.63, which is also from Russia.

        Logfile of Trend Micro HijackThis v2.0.4
        Scan saved at 18:20:54 PM, on 4/29/2010
        Platform: Windows Vista SP2 (WinNT 6.00.1906)
        MSIE: Internet Explorer v8.00 (8.00.6001.18882)
        Boot mode: Normal

        Running processes:
        C:\Windows\SYSTEM32\taskeng.exe
        C:\Windows\system32\Dwm.exe
        C:\Windows\SYSTEM32\Taskmgr.exe
        C:\Windows\explorer.exe
        C:\Windows\System32\igfxpers.exe
        C:\Program Files\Alwil Software\Avast4\ashDisp.exe
        C:\Program Files\Software602\Print2PDF\Print2PDF.exe
        C:\Windows\system32\igfxsrvc.exe
        C:\Program Files\VertrigoServ\Vertrigo.exe
        C:\Program Files\Java\jre6\bin\jusched.exe
        C:\Program Files\Google\GoogleToolbarNotifier\GoogleToolbarNotifier.exe
        C:\Windows\system32\wbem\unsecapp.exe
        C:\Program Files\Windows Media Player\wmpnscfg.exe
        C:\Program Files\X-NetStat Professional\xns5.exe
        C:\Program Files\Mozilla Firefox\firefox.exe
        C:\Program Files\HP\Digital Imaging\smart web printing\hpswp_clipbook.exe
        C:\Windows\system32\cmd.exe
        C:\Program Files\Trend Micro\HiJackThis\HiJackThis.exe

        R0 - HKCU\Software\Microsoft\Internet Explorer\Main,Start Page = about:blank
        R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Default_Search_URL = http://go.microsoft.com/fwlink/?LinkId=54896
        R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Search Page = http://go.microsoft.com/fwlink/?LinkId=54896
        R0 - HKLM\Software\Microsoft\Internet Explorer\Search,CustomizeSearch =
        R1 - HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings,ProxyServer = 10.0.0.30:8118
        R0 - HKCU\Software\Microsoft\Internet Explorer\Toolbar,LinksFolderName =
        R3 - URLSearchHook: Yahoo! Toolbar - {EF99BD32-C1FB-11D2-892F-0090271D4F88} - C:\Program Files\Yahoo!\Companion\Installs\cpn0\yt.dll
        F2 - REG:system.ini: Shell=explorer.exe rundll32.exe
        O2 - BHO: &Yahoo! Toolbar Helper - {02478D38-C3F9-4efb-9B51-7695ECA05670} - C:\Program Files\Yahoo!\Companion\Installs\cpn0\yt.dll
        O2 - BHO: HP Print Enhancer - {0347C33E-8762-4905-BF09-768834316C61} - C:\Program Files\HP\Digital Imaging\Smart Web Printing\hpswp_printenhancer.dll
        O2 - BHO: AcroIEHelperStub - {18DF081C-E8AD-4283-A596-FA578C2EBDC3} - C:\Program Files\Common Files\Adobe\Acrobat\ActiveX\AcroIEHelperShim.dll
        O2 - BHO: RoboForm BHO - {724d43a9-0d85-11d4-9908-00400523e39a} - C:\Program Files\Siber Systems\AI RoboForm\roboform.dll
        O2 - BHO: Groove GFS Browser Helper - {72853161-30C5-4D22-B7F9-0BBC1D38A37E} - C:\PROGRA~1\MICROS~3\Office12\GRA8E1~1.DLL
        O2 - BHO: Google Toolbar Helper - {AA58ED58-01DD-4d91-8333-CF10577473F7} - C:\Program Files\Google\Google Toolbar\GoogleToolbar_32.dll
        O2 - BHO: Google Toolbar Notifier BHO - {AF69DE43-7D58-4638-B6FA-CE66B5AD205D} - C:\Program Files\Google\GoogleToolbarNotifier\5.5.4723.1820\swg.dll
        O2 - BHO: Java(tm) Plug-In 2 SSV Helper - {DBC80044-A445-435b-BC74-9C25C1C588A9} - C:\Program Files\Java\jre6\bin\jp2ssv.dll
        O2 - BHO: Google Gears Helper - {E0FEFE40-FBF9-42AE-BA58-794CA7E3FB53} - C:\Program Files\Google\Google Gears\Internet Explorer\0.5.36.0\gears.dll
        O2 - BHO: SingleInstance Class - {FDAD4DA1-61A2-4FD8-9C17-86F7AC245081} - C:\Program Files\Yahoo!\Companion\Installs\cpn0\YTSingleInstance.dll
        O2 - BHO: HP Smart BHO Class - {FFFFFFFF-CF4E-4F2B-BDC2-0E72E116A856} - C:\Program Files\HP\Digital Imaging\Smart Web Printing\hpswp_BHO.dll
        O3 - Toolbar: Yahoo! Toolbar - {EF99BD32-C1FB-11D2-892F-0090271D4F88} - C:\Program Files\Yahoo!\Companion\Installs\cpn0\yt.dll
        O3 - Toolbar: &RoboForm - {724d43a0-0d85-11d4-9908-00400523e39a} - C:\Program Files\Siber Systems\AI RoboForm\roboform.dll
        O3 - Toolbar: Google Toolbar - {2318C2B1-4965-11d4-9B18-009027A5CD4F} - C:\Program Files\Google\Google Toolbar\GoogleToolbar_32.dll
        O4 - HKLM\..\Run: [RtHDVCpl] C:\Program Files\Realtek\Audio\HDA\RtHDVCpl.exe -s
        O4 - HKLM\..\Run: [IgfxTray] C:\Windows\system32\igfxtray.exe
        O4 - HKLM\..\Run: [Persistence] C:\Windows\system32\igfxpers.exe
        O4 - HKLM\..\Run: [avast!] C:\PROGRA~1\ALWILS~1\Avast4\ashDisp.exe
        O4 - HKLM\..\Run: [Print2PDF Print Monitor] "C:\Program Files\Software602\Print2PDF\Print2PDF.exe" /server
        O4 - HKLM\..\Run: [VertrigoServ] "C:\Program Files\VertrigoServ\Vertrigo.exe"
        O4 - HKLM\..\Run: [SunJavaUpdateSched] "C:\Program Files\Java\jre6\bin\jusched.exe"
        O4 - HKLM\..\Run: [Adobe Reader Speed Launcher] "C:\Program Files\Adobe\Reader 9.0\Reader\Reader_sl.exe"
        O4 - HKLM\..\Run: [Adobe ARM] "C:\Program Files\Common Files\Adobe\ARM\1.0\AdobeARM.exe"
        O4 - HKLM\..\Run: [Google Quick Search Box] "C:\Program Files\Google\Quick Search Box\GoogleQuickSearchBox.exe" /autorun
        O4 - HKLM\..\Run: [iTunesHelper] "C:\Program Files\iTunes\iTunesHelper.exe"
        O4 - HKLM\..\Run: [QuickTime Task] "C:\Program Files\QuickTime\QTTask.exe" -atboottime
        O4 - HKCU\..\Run: [swg] "C:\Program Files\Google\GoogleToolbarNotifier\GoogleToolbarNotifier.exe"
        O4 - HKCU\..\Run: [CCProxy] C:\CCProxy\CCProxy.exe
        O4 - HKCU\..\Run: [AlcoholAutomount] "C:\Program Files\Alcohol Soft\Alcohol 120\axcmd.exe" /automount
        O4 - HKCU\..\Run: [RoboForm] "C:\Program Files\Siber Systems\AI RoboForm\RoboTaskBarIcon.exe"
        O4 - HKCU\..\Run: [FileHippo.com] "C:\Program Files\filehippo.com\UpdateChecker.exe" /background
        O4 - HKUS\S-1-5-19\..\Run: [Sidebar] %ProgramFiles%\Windows Sidebar\Sidebar.exe /detectMem (User 'LOCAL SERVICE')
        O4 - HKUS\S-1-5-19\..\Run: [WindowsWelcomeCenter] rundll32.exe oobefldr.dll,ShowWelcomeCenter (User 'LOCAL SERVICE')
        O4 - HKUS\S-1-5-20\..\Run: [Sidebar] %ProgramFiles%\Windows Sidebar\Sidebar.exe /detectMem (User 'NETWORK SERVICE')
        O4 - Startup: AutorunsDisabled
        O4 - Startup: Locate32 Autorun.lnk = C:\Program Files\Locate\Locate32.exe
        O4 - Startup: OneNote Table Of Contents.onetoc2
        O8 - Extra context menu item: Add to Google Photos Screensa&ver - res://C:\Windows\system32\GPhotos.scr/200
        O8 - Extra context menu item: Customize Menu - file://C:\Program Files\Siber Systems\AI RoboForm\RoboFormComCustomizeIEMenu.html
        O8 - Extra context menu item: E&xport to Microsoft Excel - res://C:\PROGRA~1\MICROS~3\Office14\EXCEL.EXE/3000
        O8 - Extra context menu item: Fill Forms - file://C:\Program Files\Siber Systems\AI RoboForm\RoboFormComFillForms.html
        O8 - Extra context menu item: Google Sidewiki... - res://C:\Program Files\Google\Google Toolbar\Component\GoogleToolbarDynamic_mui_en_96D6FF0C6D236BF8.dll/cmsidewiki.html
        O8 - Extra context menu item: RoboForm Toolbar - file://C:\Program Files\Siber Systems\AI RoboForm\RoboFormComShowToolbar.html
        O8 - Extra context menu item: S&end to OneNote - res://C:\PROGRA~1\MICROS~3\Office14\ONBttnIE.dll/105
        O8 - Extra context menu item: Save Forms - file://C:\Program Files\Siber Systems\AI RoboForm\RoboFormComSavePass.html
        O9 - Extra button: (no name) - {09C04DA7-5B76-4EBC-BBEE-B25EAC5965F5} - C:\Program Files\Google\Google Gears\Internet Explorer\0.5.36.0\gears.dll
        O9 - Extra 'Tools' menuitem: &Gears Settings - {09C04DA7-5B76-4EBC-BBEE-B25EAC5965F5} - C:\Program Files\Google\Google Gears\Internet Explorer\0.5.36.0\gears.dll
        O9 - Extra button: Send to OneNote - {2670000A-7350-4f3c-8081-5663EE0C6C49} - C:\PROGRA~1\MICROS~3\Office12\ONBttnIE.dll
        O9 - Extra 'Tools' menuitem: S&end to OneNote - {2670000A-7350-4f3c-8081-5663EE0C6C49} - C:\PROGRA~1\MICROS~3\Office12\ONBttnIE.dll
        O9 - Extra button: Fill Forms - {320AF880-6646-11D3-ABEE-C5DBF3571F46} - file://C:\Program Files\Siber Systems\AI RoboForm\RoboFormComFillForms.html
        O9 - Extra 'Tools' menuitem: Fill Forms - {320AF880-6646-11D3-ABEE-C5DBF3571F46} - file://C:\Program Files\Siber Systems\AI RoboForm\RoboFormComFillForms.html
        O9 - Extra button: Save - {320AF880-6646-11D3-ABEE-C5DBF3571F49} - file://C:\Program Files\Siber Systems\AI RoboForm\RoboFormComSavePass.html
        O9 - Extra 'Tools' menuitem: Save Forms - {320AF880-6646-11D3-ABEE-C5DBF3571F49} - file://C:\Program Files\Siber Systems\AI RoboForm\RoboFormComSavePass.html
        O9 - Extra button: Print2PDF - {5B7027AD-AA6D-40df-8F56-9560F277D2A5} - C:\Program Files\Software602\Print2PDF\Print602.dll
        O9 - Extra 'Tools' menuitem: Print2PDF - {5B7027AD-AA6D-40df-8F56-9560F277D2A5} - C:\Program Files\Software602\Print2PDF\Print602.dll
        O9 - Extra button: RoboForm - {724d43aa-0d85-11d4-9908-00400523e39a} - file://C:\Program Files\Siber Systems\AI RoboForm\RoboFormComShowToolbar.html
        O9 - Extra 'Tools' menuitem: RoboForm Toolbar - {724d43aa-0d85-11d4-9908-00400523e39a} - file://C:\Program Files\Siber Systems\AI RoboForm\RoboFormComShowToolbar.html
        O9 - Extra button: Research - {92780B25-18CC-41C8-B9BE-3C9C571A8263} - C:\PROGRA~1\MICROS~3\Office12\REFIEBAR.DLL
        O9 - Extra button: Show or hide HP Smart Web Printing - {DDE87865-83C5-48c4-8357-2F5B1AA84522} - C:\Program Files\HP\Digital Imaging\Smart Web Printing\hpswp_BHO.dll
        O17 - HKLM\System\CCS\Services\Tcpip\..\{A80AB385-7767-4B5C-AF97-DBD65B29D8D1}: NameServer = 218.248.255.146 218.248.255.212
        O17 - HKLM\System\CCS\Services\Tcpip\..\{D10402C1-9CDE-4582-A6B7-6C0D33B0E7BC}: NameServer = 218.248.255.146,218.248.255.212
        O18 - Protocol: grooveLocalGWS - {88FED34C-F0CA-4636-A375-3CB6248B04CD} - C:\PROGRA~1\MICROS~3\Office12\GR99D3~1.DLL
        O22 - SharedTaskScheduler: Component Categories cache daemon - {8C7461EF-2B13-11d2-BE35-3078302C2030} - C:\Windows\system32\browseui.dll
        O23 - Service: Apple Mobile Device - Apple Inc. - C:\Program Files\Common Files\Apple\Mobile Device Support\bin\AppleMobileDeviceService.exe
        O23 - Service: avast! iAVS4 Control Service (aswUpdSv) - ALWIL Software - C:\Program Files\Alwil Software\Avast4\aswUpdSv.exe
        O23 - Service: avast! Antivirus - ALWIL Software - C:\Program Files\Alwil Software\Avast4\ashServ.exe
        O23 - Service: avast! Mail Scanner - ALWIL Software - C:\Program Files\Alwil Software\Avast4\ashMaiSv.exe
        O23 - Service: avast! Web Scanner - ALWIL Software - C:\Program Files\Alwil Software\Avast4\ashWebSv.exe
        O23 - Service: Bonjour Service - Apple Inc. - C:\Program Files\Bonjour\mDNSResponder.exe
        O23 - Service: CCProxy - Youngzsoft - C:\CCProxy\CCProxy.exe
        O23 - Service: Google Update Service (gupdate1c9c328490dac0) (gupdate1c9c328490dac0) - Google Inc. - C:\Program Files\Google\Update\GoogleUpdate.exe
        O23 - Service: Google Software Updater (gusvc) - Google - C:\Program Files\Google\Common\Google Updater\GoogleUpdaterService.exe
        O23 - Service: Distributed Transaction Coordinator MSDTCwercplsupport (MSDTCwercplsupport) - Unknown owner - C:\Windows\system32\acluiz.exe
        O23 - Service: Realtek Audio Service (RtkAudioService) - Realtek Semiconductor - C:\Windows\RtkAudioService.exe
        O23 - Service: StarWind AE Service (StarWindServiceAE) - Rocket Division Software - C:\Program Files\Alcohol Soft\Alcohol 120\StarWind\StarWindServiceAE.exe
        O23 - Service: SuperProServer - Unknown owner - C:\Windows\spnsrvnt.exe (file missing)
        O23 - Service: Vertrigo_Apache - Apache Software Foundation - C:\Program Files\VertrigoServ\apache\bin\v_apache.exe
        O23 - Service: Vertrigo_MySQL - Unknown owner - C:\Program Files\VertrigoServ\mysql\bin\v_mysqld.exe

        -- End of file - 10965 bytes


  • Security context token exception in WCF

    - by Alhambra Eidos
    Hi all, I'm using a WCF service and I get the following error: "The security context token is expired or is not valid. The message was not processed." Client config:

        <endpoint address="http://probiz:49610/GestionOrganizacion.svc"
                  binding="wsHttpBinding" bindingConfiguration="WSHttpBinding_IOrganizacion"
                  contract="CarWin.ServiceContracts.Interfaces.IOrganizacion"
                  behaviorConfiguration="NewBehavior" name="PRO_WSHttpBinding_IOrganizacion">
            <identity>
                <dns value="localhost" />
            </identity>
        </endpoint>

        <binding name="WSHttpBinding_IOrganizacion" closeTimeout="00:30:00" openTimeout="00:30:00"
                 receiveTimeout="00:30:00" sendTimeout="00:30:00" bypassProxyOnLocal="false"
                 transactionFlow="false" hostNameComparisonMode="StrongWildcard"
                 maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647"
                 messageEncoding="Text" textEncoding="utf-8" useDefaultWebProxy="true"
                 allowCookies="false">
            <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="2147483647"
                          maxBytesPerRead="4096" maxNameTableCharCount="16384" />
            <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" />
            <security mode="Message">
                <transport clientCredentialType="Windows" proxyCredentialType="None" realm="" />
                <message clientCredentialType="Windows" negotiateServiceCredential="true"
                         algorithmSuite="Default" establishSecurityContext="true" />
            </security>
        </binding>

    More config:

        <endpointBehaviors>
            <behavior name="NewBehavior">
                <dataContractSerializer maxItemsInObjectGraph="2147483647" />
            </behavior>
        </endpointBehaviors>

    Thanks in advance, greetings
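
    For context, with establishSecurityContext="true" the wsHttpBinding negotiates a secure-conversation session token that can expire between calls, for example when the client proxy is kept open but idle, or when the service is recycled. A commonly suggested mitigation, sketched below, is to turn the session token off so each request does its own key exchange; this trades a little per-call overhead for immunity to token expiry. (Assumes the server side uses a matching wsHttpBinding configuration; the change must be made on both ends.)

        <security mode="Message">
            <transport clientCredentialType="Windows" proxyCredentialType="None" realm="" />
            <message clientCredentialType="Windows" negotiateServiceCredential="true"
                     algorithmSuite="Default" establishSecurityContext="false" />
        </security>

    Alternatively, recreating the client proxy (or catching the fault and retrying on a fresh channel) avoids reusing an expired token.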


  • WCF RIA Services error: Load operation failed for query 'GetTranslationProgress'

    - by Manoj
    Hello, I am using the WCF RIA Services beta in my Silverlight application. I have a long-running task on the server which keeps writing its status to a database table, and a "GetTranslationProgress" RIA Services query method which reads that state to get the status. This query method is called continuously at a regular interval of 5 seconds until a status of Finished or Error is retrieved. In some cases, if the task on the server runs for a long duration (2-3 minutes), I receive this error:

        Load operation failed for query 'GetTranslationProgress'. The server did not provide a meaningful reply;
        this might be caused by a contract mismatch, a premature session shutdown or an internal server error.
           at System.Windows.Ria.OperationBase.Complete(Exception error)
           at System.Windows.Ria.LoadOperation.Complete(Exception error)
           at System.Windows.Ria.DomainContext.CompleteLoad(IAsyncResult asyncResult)
           at System.Windows.Ria.DomainContext.<>c__DisplayClass17.<>b__13(Object )

    The error occurs intermittently and I have no idea what could be causing it. I have checked most of the forums and sites but have not been able to find a solution to this issue. Please help.
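
    One thing worth ruling out before digging into server logs: a fixed 5-second timer can start a new load while the previous one is still in flight on a slow call, and overlapping loads make intermittent "no meaningful reply" failures harder to interpret. A minimal guard, sketched with the conventionally generated context and query-method names (these names are assumptions, not from the question):

        // Hypothetical polling guard; _context is the generated DomainContext and
        // GetTranslationProgressQuery() the method RIA Services generates for the
        // 'GetTranslationProgress' query.
        private LoadOperation _pending;

        private void PollTick(object sender, EventArgs e)
        {
            if (_pending != null && !_pending.IsComplete)
                return; // previous poll still outstanding; skip this tick
            _pending = _context.Load(_context.GetTranslationProgressQuery());
        }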


  • Android Camera without Preview

    - by eyurdakul
    I am writing an Android 1.5 application which starts just after boot-up. It is a service and should take a picture without a preview. The app will log the light density in some areas. I was able to take a picture, but the picture was black. After googling like crazy, I came across a bug thread about it: if you don't generate a preview, the image will be black, since the Android camera needs a preview to set up exposure and focus. I've created a SurfaceView and listener, but the surfaceCreated event never gets fired. I guess the reason is that the surface is not actually being created visually. I've also seen examples of calling the camera statically with MediaStore.CAPTURE_OR_SOMETHING, which supposedly takes a picture and saves it in the desired folder with two lines of code, but that doesn't take a picture either. Do I need to use IPC and bindService to call this function, or do you have any suggestion for achieving my goal (taking a picture without a preview)? If so, would you give me a small piece of code as an example?
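
    For what it's worth, the workaround usually suggested for this limitation is to give the camera a real but effectively invisible surface, for example a 1x1-pixel SurfaceView added to the window, so the preview actually starts and exposure/focus get configured before takePicture() is called. A rough sketch of the camera side, assuming you already have a created SurfaceHolder from such a view (illustrative only, not tested on 1.5):

        import android.hardware.Camera;
        import android.view.SurfaceHolder;

        public class HiddenCapture {
            /** Takes one picture using a (tiny, effectively invisible) preview surface. */
            public static void capture(SurfaceHolder holder) throws java.io.IOException {
                final Camera camera = Camera.open();
                camera.setPreviewDisplay(holder); // must be an already-created surface
                camera.startPreview();            // lets the driver set exposure/focus
                camera.takePicture(null, null, new Camera.PictureCallback() {
                    public void onPictureTaken(byte[] jpeg, Camera cam) {
                        // write the JPEG bytes wherever the light-density log wants them
                        cam.stopPreview();
                        cam.release();
                    }
                });
            }
        }

    The key point for the symptom described: surfaceCreated only fires once the view is actually attached to a window and laid out, which is why an unattached SurfaceView never triggers it.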


  • Lots of errors from insserv on apt-get operations after trying to install tomcat

    - by yankee
    I wanted to install tomcat on my Debian 6.0.4 machine. I tried apt-get install tomcat6-user, which worked fine. But then I changed my mind about the user installation and wanted to install the package tomcat6. This resulted in a bunch of errors (see below). Now whatever I try to do with apt-get or with aptitude (trying to remove tomcat6-user, trying to remove tomcat6, trying to perform an apt-get upgrade, ...) just results in the same list of errors. How did I manage that? And how can I fix it?

        # apt-get install tomcat6
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following extra packages will be installed:
          authbind
        Suggested packages:
          tomcat6-docs tomcat6-admin tomcat6-examples libtcnative-1
        The following NEW packages will be installed:
          authbind tomcat6
        0 upgraded, 2 newly installed, 0 to remove and 32 not upgraded.
        Need to get 56.6 kB of archives.
        After this operation, 442 kB of additional disk space will be used.
        Do you want to continue [Y/n]?
        Get:1 http://mirror.hetzner.de/debian/packages/ squeeze/main authbind amd64 1.2.0 [17.3 kB]
        Get:2 http://mirror.hetzner.de/debian/security/ squeeze/updates/main tomcat6 all 6.0.35-1+squeeze2 [39.3 kB]
        Fetched 56.6 kB in 0s (441 kB/s)
        Preconfiguring packages ...
        Selecting previously deselected package authbind.
        (Reading database ... 34717 files and directories currently installed.)
        Unpacking authbind (from .../authbind_1.2.0_amd64.deb) ...
        Selecting previously deselected package tomcat6.
        Unpacking tomcat6 (from .../tomcat6_6.0.35-1+squeeze2_all.deb) ...
        Processing triggers for man-db ...
        Setting up authbind (1.2.0) ...
        Setting up tomcat6 (6.0.35-1+squeeze2) ...
        Creating config file /etc/default/tomcat6 with new version
        Adding system user `tomcat6' (UID 108) ...
        Adding new user `tomcat6' (UID 108) with group `tomcat6' ...
        Not creating home directory `/usr/share/tomcat6'.
        insserv: warning: script 'S99iptables-custom' missing LSB tags and overrides
        insserv: warning: script 'iptables-custom' missing LSB tags and overrides
        insserv: There is a loop at service iptables-custom if started
        insserv: There is a loop between service rmnologin and mountnfs if started
        insserv: loop involving service mountnfs at depth 6
        insserv: loop involving service networking at depth 5
        insserv: loop involving service kbd at depth 9
        insserv: There is a loop between service rmnologin and mountall-bootclean if started
        insserv: loop involving service mountall-bootclean at depth 5
        insserv: loop involving service mountall at depth 4
        insserv: There is a loop between service iptables-custom and lvm2 if started
        insserv: loop involving service lvm2 at depth 2
        insserv: loop involving service udev at depth 1
        insserv: There is a loop at service rmnologin if started
        insserv: There is a loop between service iptables-custom and checkroot if started
        insserv: loop involving service checkroot at depth 2
        insserv: loop involving service keyboard-setup at depth 1
        insserv: Starting iptables-custom depends on rmnologin and therefore on system facility `$all' which can not be true!
        [the line above repeats well over a hundred times throughout the output; duplicates trimmed here]
        insserv: Max recursions depth 99 reached
        insserv: loop involving service courier-imap-ssl at depth 1
        insserv: loop involving service hwclockfirst at depth 2
        insserv: loop involving service mountoverflowtmp at depth 9
        insserv: loop involving service checkfs at depth 6
        insserv: loop involving service mdadm-raid at depth 4
        insserv: loop involving service hostname at depth 3
        insserv: There is a loop between service iptables-custom and ifupdown-clean if started
        insserv: loop involving service ifupdown-clean at depth 5
        insserv: There is a loop between service rmnologin and mountall if started
        insserv: There is a loop between service iptables-custom and mountdevsubfs if started
        insserv: loop involving service mountdevsubfs at depth 1
        insserv: loop involving service mtab at depth 6
        insserv: There is a loop between service rmnologin and mountoverflowtmp if started
        insserv: exiting now without changing boot order!
        update-rc.d: error: insserv rejected the script header
        dpkg: error processing tomcat6 (--configure): subprocess installed post-installation script returned error exit status 1
        configured to not write apport reports
        Errors were encountered while processing:
          tomcat6
        E: Sub-process /usr/bin/dpkg returned an error code (1)
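
    For readers hitting the same wall: the first two warnings are the actual problem. insserv refuses to compute a boot order while /etc/init.d/iptables-custom (and its S99 symlink) carries no LSB header, and every subsequent dpkg run re-triggers the same failure. A sketch of the kind of header insserv expects at the top of that script (the runlevels and dependencies below are assumptions; adjust them to what the firewall script really needs), after which the pending apt-get/dpkg operations should be able to complete:

        ### BEGIN INIT INFO
        # Provides:          iptables-custom
        # Required-Start:    $network $local_fs
        # Required-Stop:     $network $local_fs
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: Load custom iptables rules
        ### END INIT INFO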


  • WCF Streaming not working at server

    - by Radhi
    Hi, I have used a WCF service to transfer large files in chunks to the server; for that I referenced this article: http://kjellsj.blogspot.com/2007/02/wcf-streaming-upload-files-over-http.html. I have configured my application on IIS on my machine and it works fine there: it allows uploads of up to 64 MB. But when we published the site, it allows a maximum of only about 30 MB; if I try to upload more than that, I get error 404 - resource not found. Here is the binding config I have used:

        <basicHttpBinding>
            <!-- buffer: 64KB; max size: 64MB -->
            <binding name="FileTransferServicesBinding"
                     closeTimeout="00:01:00" openTimeout="00:01:00"
                     receiveTimeout="00:10:00" sendTimeout="00:01:00"
                     transferMode="Streamed" messageEncoding="Mtom"
                     maxBufferSize="65536" maxReceivedMessageSize="67108864">
                <security mode="None">
                    <transport clientCredentialType="None"/>
                </security>
            </binding>
        </basicHttpBinding>

    Please suggest where I am missing anything; if more code is required, please let me know. Thanks in advance.
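
    A plausible culprit, given the symptoms: the roughly 30 MB ceiling and the 404 match IIS 7's request-filtering default of maxAllowedContentLength="30000000" (about 28.6 MB; oversized requests are rejected with a 404.13 substatus before they ever reach WCF). If the published server runs IIS 7, which is an assumption here, raising both the IIS limit and the ASP.NET one in web.config lines the hosted limits up with the 64 MB binding:

        <system.web>
            <!-- maxRequestLength is in KB: 65536 KB = 64 MB -->
            <httpRuntime maxRequestLength="65536" />
        </system.web>
        <system.webServer>
            <security>
                <requestFiltering>
                    <!-- maxAllowedContentLength is in bytes: 64 MB -->
                    <requestLimits maxAllowedContentLength="67108864" />
                </requestFiltering>
            </security>
        </system.webServer>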


  • KISS: Simple C# application which communicates with a RESTful web service.

    - by Workshop Alex
    Following the KISS principle, I suddenly realised the following: in .NET, you can use the Entity Framework to wrap around a database. This model can be exposed as a web service through WCF. This web service would have a very standardized definition, so a client application could be created which could consume any such RESTful web service. I don't want to re-invent the wheel, and it wouldn't surprise me if someone has already done this, so my question is simple: has anyone already created a simple (desktop, not web) client application that can consume a RESTful service that's based on the Entity Framework and which will allow the user to read and write data directly through this service? Otherwise, I'll just have to "invent" this myself. :-) Problem is, the database layer and RESTful service are already finished. The RESTful service will only stay in the project during its development phase, since we can use the database-layer assembly directly from the web applications that are built around it. When the web application is deployed, the RESTful services are just kept out of the deployment. But the database has a lot of data to manage, over nearly 50 tables. When developing against a local database, we have straight access to the database, so I wouldn't need this tool there. Once deployed, the web application would be the only way to access the data, so I could not use this tool then either. But we also have a test phase where the database is stored on another system outside the local domain, and this database is not available to developers; only administrators have direct access to it, making tests a bit more complex. However, through the RESTful service, I can still access the data directly. Thus, when some test goes wrong, I can repair the data through this connection or just create a copy of the data for tests on my local system. There's plenty of other functionality, and it's even possible to just open the URL of a table service straight in Excel or XMLSpy to see the contents. But when I want to write something back, I have to write special code to do just that. A generic tool that would allow me to access the data and modify it would be easier, and since it's a generic setup around ADO.NET Data Services, building one should be reasonably easy too. So, I can do it, but I hoped someone else had already done something similar. It appears that no such tool has been made yet...
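
    As a point of reference for anyone attempting it, the read/write plumbing against such a service really is small. A minimal console sketch using the ADO.NET Data Services client (the service URL, entity set name, and Customer shape are hypothetical stand-ins; a truly generic tool would discover these from the service's $metadata instead of hard-coding them):

        using System;
        using System.Data.Services.Client;   // ADO.NET Data Services client
        using System.Data.Services.Common;   // DataServiceKeyAttribute

        // Hypothetical entity shape matching one table exposed by the service.
        [DataServiceKey("Id")]
        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        class Program
        {
            static void Main()
            {
                var ctx = new DataServiceContext(new Uri("http://testserver/Data.svc"));

                // Read: enumerate an entity set.
                foreach (Customer c in ctx.CreateQuery<Customer>("Customers"))
                    Console.WriteLine("{0}: {1}", c.Id, c.Name);

                // Write: attach an existing entity by key, modify it, push it back.
                var fix = new Customer { Id = 42, Name = "Repaired name" };
                ctx.AttachTo("Customers", fix);
                ctx.UpdateObject(fix);
                ctx.SaveChanges();
            }
        }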


  • Importing MEF-Plugins into MVC Controllers

    - by Marks
    Hello. There are several examples of using MEF to plug whole controller/view packages into an MVC application, but I haven't found one using MEF to plug in functional parts used by other controllers. For example, think of a NewsService with a simple interface like:

        interface INewsService
        {
            List<NewsItem> GetAllNews();
        }

    It gets news from wherever it wants and returns it as a list of NewsItems. My page should load an exported INewsService and show the news on the page. But there is the problem: I can't just use [Import] in the controllers, as they are created only when they are needed. Edit: importing them into the main MVCApplication class doesn't work, because I can't access it from the controllers. I think I found a way to access the main app via HttpContext.ApplicationInstance, but the service object in this instance is null, although it was created successfully in the Application_Start() method. Any idea why? So, how can I access the NewsService from within a controller? Thanks in advance, Marks
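
    One pattern that sidesteps the application-instance juggling entirely: compose each controller from a shared CompositionContainer at the moment MVC creates it, via a custom controller factory. A sketch against the MVC 2 factory API (using the web app's bin folder as the catalog path is an assumption; point the DirectoryCatalog at wherever the plugin assemblies actually live):

        using System;
        using System.ComponentModel.Composition;          // [Import], ComposeParts
        using System.ComponentModel.Composition.Hosting;  // CompositionContainer, DirectoryCatalog
        using System.Web;
        using System.Web.Mvc;
        using System.Web.Routing;

        public class MefControllerFactory : DefaultControllerFactory
        {
            // One shared container for the application.
            private static readonly CompositionContainer Container =
                new CompositionContainer(new DirectoryCatalog(HttpRuntime.BinDirectory));

            protected override IController GetControllerInstance(
                RequestContext requestContext, Type controllerType)
            {
                IController controller = base.GetControllerInstance(requestContext, controllerType);
                if (controller != null)
                    Container.ComposeParts(controller); // satisfies [Import] INewsService etc.
                return controller;
            }
        }

    Registered once in Application_Start() via ControllerBuilder.Current.SetControllerFactory(new MefControllerFactory()), any controller can then declare [Import] public INewsService NewsService { get; set; } as an ordinary property, with no reliance on HttpContext.ApplicationInstance.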


  • Need help with WCF design

    - by Jason
    I have been tasked with creating a set of web services. We are a Microsoft shop, so I will be using WCF for this project. There is an interesting design consideration that I haven't been able to figure out a solution for yet. I'll try to explain it with an example: my WCF service exposes a method named Foo(). 10 different users call Foo() at roughly the same time. I have 5 special resources called R1, R2, R3, R4, and R5. We don't really need to know what the resources are, other than the fact that a particular resource can only be in use by one caller at a time. Foo() is responsible for performing an action using one of these special resources. So, in a round-robin fashion, Foo() needs to find a resource that is not in use. If no resources are available, it must wait for one to be freed up. At first, this seems like an easy task: I could maybe create a singleton that keeps track of which resources are currently in use. The big problem is the fact that I need this solution to be viable in a web farm scenario. I'm sure there is a good solution to this problem, but I've just never run across this scenario before. I need some sort of resource tracker / provider that can be shared between multiple WCF hosts. Any ideas from the architects out there would be greatly appreciated!
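
    To make the single-host half of the problem concrete: on one machine this is just a blocking pool, as the sketch below shows (.NET 4's BlockingCollection; resource names from the example). The web-farm requirement is what forces the state out of process: the same acquire/release idea then has to be backed by storage all hosts can see, e.g. a database table with one row per resource claimed under a transaction, since nothing in-process can coordinate across machines.

        using System;
        using System.Collections.Concurrent;

        public static class ResourcePool
        {
            // A FIFO queue gives round-robin reuse of R1..R5.
            private static readonly BlockingCollection<string> Free =
                new BlockingCollection<string>(new ConcurrentQueue<string>(
                    new[] { "R1", "R2", "R3", "R4", "R5" }));

            public static string Acquire()
            {
                return Free.Take(); // blocks the caller until a resource is free
            }

            public static void Release(string resource)
            {
                Free.Add(resource);
            }
        }

        // Inside Foo():
        //     string r = ResourcePool.Acquire();
        //     try { /* use r */ } finally { ResourcePool.Release(r); }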


  • SpeechRecognizer causes ANR... I need help with Android speech API.

    - by Dondo Chaka
    I'm trying to use Android's speech recognition package to record user speech and translate it to text. Unfortunately, when I attempt to initiate listening, I get an ANR error that doesn't point to anything specific. As the SpeechRecognizer API indicates, a RuntimeException is thrown if you attempt to call it from anywhere other than the main thread:

        java.lang.RuntimeException: SpeechRecognizer should be used only from the application's main thread

    This made me wonder whether the processing was just too demanding, but I know that other applications use the Android API for this purpose and it is typically pretty snappy. Here is a (trimmed) sample of the code I'm trying to call from my service. Is this the proper approach?

        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        intent.putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE, "com.domain.app");

        SpeechRecognizer recognizer = SpeechRecognizer
                .createSpeechRecognizer(this.getApplicationContext());

        RecognitionListener listener = new RecognitionListener() {
            @Override
            public void onResults(Bundle results) {
                ArrayList<String> voiceResults = results
                        .getStringArrayList(RecognizerIntent.EXTRA_RESULTS);
                if (voiceResults == null) {
                    Log.e(getString(R.string.log_label), "No voice results");
                } else {
                    Log.d(getString(R.string.log_label), "Printing matches: ");
                    for (String match : voiceResults) {
                        Log.d(getString(R.string.log_label), match);
                    }
                }
            }

            @Override
            public void onReadyForSpeech(Bundle params) {
                Log.d(getString(R.string.log_label), "Ready for speech");
            }

            @Override
            public void onError(int error) {
                Log.d(getString(R.string.log_label), "Error listening for speech: " + error);
            }

            @Override
            public void onBeginningOfSpeech() {
                Log.d(getString(R.string.log_label), "Speech starting");
            }
        };

        recognizer.setRecognitionListener(listener);
        recognizer.startListening(intent);

    Thanks for taking the time to help. This has been a hurdle I haven't been able to get over yet.
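
    Since a Service's work may well run off the main thread (and an IntentService's onHandleIntent always does), one low-risk adjustment, sketched below, is to hop onto the main looper before touching the recognizer. The recognition work itself happens in a separate system process, so this does not make the UI thread do the heavy lifting:

        import android.content.Intent;
        import android.os.Handler;
        import android.os.Looper;
        import android.speech.SpeechRecognizer;

        // Inside the service, with 'intent' and 'listener' built as in the sample
        // above (they must be final locals or fields to be captured here):
        new Handler(Looper.getMainLooper()).post(new Runnable() {
            public void run() {
                SpeechRecognizer recognizer =
                        SpeechRecognizer.createSpeechRecognizer(getApplicationContext());
                recognizer.setRecognitionListener(listener);
                recognizer.startListening(intent);
            }
        });

    If an ANR still occurs with everything on the main thread, look for blocking work inside the listener callbacks or elsewhere on that thread.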


  • How can I return something other than an enum from an NServiceBus endpoint exposed as a WCF service?

    - by Todd Stout
    I have a service exposed as WCF via NServiceBus. Ultimately, I'd like to call this service from Silverlight. My WCF service interface looks like this:

        [ServiceContract]
        public interface ISettingsService
        {
            [OperationContract(
                Action = "http://tempuri.org/IWcfServiceOf_RequestSettingsMessage_SettingsResponseMessage/Process",
                ReplyAction = "http://tempuri.org/IWcfServiceOf_RequestSettingsMessage_SettingsResponseMessage/ProcessResponse")]
            SettingsResponseMessage FetchSettings(RequestSettingsMessage request);
        }

    My NSB WCF service is defined as:

        public class CoreService : WcfService<RequestSettingsMessage, SettingsResponseMessage>
        {
        }

    When I invoke the FetchSettings method on the service, I get an exception:

        System.TypeInitializationException: The type initializer for 'NServiceBus.WcfService`2' threw an exception.
        ---- System.InvalidOperationException: Centerlink.Services.Core.Msg.Settings.SettingsResponseMessage
        must be an enum representing error codes returned by the server.

    It seems that the WcfService<,> class restricts the return type of a WCF method to an enum. How can I have my service return something other than an enum? Do I need to create a custom implementation of NServiceBus.WcfService<,>?


  • VS 2010 Error “Object reference not set to an instance of an object” when adding Service Reference f

    - by Andy
    I have a VS2010 (RTM) solution which contains: a WCF service project, a console WCF client project, a class project for DataContracts and members, and a class project for some simple classes. I successfully added a service reference in the console client project and ran the client. I then went through a long dev cycle, repeatedly modifying the service and then updating the console project's service reference. I then changed the namespace and assembly names for the projects, as well as the .cs using references and app.config. I of course missed some things, as the solution would not build, so I eventually removed the project references and the service reference, cleaned, and built successfully. I then attempted to add the service reference again; it was discovered, but VS threw the "Object reference not set to an instance of an object" error when OK'ing. Fix in answer below...


  • LLBLGen Pro feature highlights: automatic element name construction

    - by FransBouma
    (This post is part of a series of posts about features of the LLBLGen Pro system.)

    One of the things one might take for granted, but which has a huge impact on the time spent in an entity modeling environment, is the way the system creates names for elements out of the information provided; in short: automatic element name construction. Element names are created in both directions of modeling, database first and model first, and the more names the system can create for you without you having to rename them, the better. LLBLGen Pro has a rich, fine-grained system for creating element names out of the meta-data available, which I'll describe in more detail below. First the model-element-related naming features are highlighted, in the section Automatic model element naming features, and after that I'll go into more detail about the relational model element naming features LLBLGen Pro has to offer, in the section Automatic relational model element naming features.

    Automatic model element naming features

    When working database first, the element names in the model, e.g. entity names and entity field names, are in general determined from the relational model elements (e.g. table, table field) they're mapped on, as the model elements are reverse engineered from these relational model elements. It doesn't take rocket science to automatically name an entity Customer if the entity was created after reverse engineering a table named Customer. It gets a little trickier when the entity created by reverse engineering a table called TBL_ORDER_LINES has to be named 'OrderLine' automatically. Automatic model element naming also takes effect with model-first development, where settings are used to provide you with a default name, e.g. for navigator name creation when you define a new relationship. The features below are available to you in the Project Settings: open Project Settings on a loaded project and navigate to Conventions -> Element Name Construction.

    Strippers!

    The above example, TBL_ORDER_LINES, shows that some parts of the table name might not be needed for name creation, in this case the 'TBL_' prefix. Some 'brilliant' DBAs even add suffixes to table names, fragments you might not want to appear in the entity names. LLBLGen Pro lets you define both prefix and suffix fragments to strip off of table, view, stored procedure, parameter, table field and view field names. In the example above, the fragment 'TBL_' is a good candidate for such a strip pattern. You can specify more than one pattern for e.g. the table prefix strip pattern, so even a really messy schema can still be used to produce clean names.

    Underscores Be Gone

    Another thing you might want to get rid of is underscores. After all, most naming schemes for entities and their classes use PasCal casing rules and don't allow underscores. LLBLGen Pro can automatically strip out underscores for you. It's an optional feature, so if you like the underscores, you're not forced to see them go: LLBLGen Pro will leave them alone when told to do so.

    PasCal everywhere... or not, your call

    LLBLGen Pro can automatically PasCal-case names on word breaks. It determines word breaks in a couple of ways: a space marks a word break, an underscore marks a word break, and a case difference marks a word break. It will remove spaces in all cases and, based on the underscore-removal setting, keep or remove the underscores, upper-casing the first character of each word-break fragment and lower-casing the rest. Say we keep the defaults, which are to remove underscores, always PasCal-case, and strip the TBL_ fragment. With our example TBL_ORDER_LINES, after stripping TBL_ from the table name we get two word fragments: ORDER and LINES. The underscores are removed, the first character of each fragment is upper-cased and the rest lower-cased, so this results in OrderLines. Almost there!

    Pluralization and Singularization

    In general entity names are singular, like Customer or OrderLine, so LLBLGen Pro offers a way to singularize the names. This converts OrderLines, the result we got after the PasCal-casing step, into OrderLine: exactly what we're after.

    Show me the patterns!

    There are other situations in which you want more flexibility. Say you have an entity Customer and an entity Order, and there's a foreign key constraint defined from the target of Order to the target of Customer. This foreign key constraint results in a 1:n relationship between the entities Customer and Order. A relationship has navigators mapped onto it in both entities the relationship is between. For this particular relationship we'd like to have Customer as navigator in Order and Orders as navigator in Customer, so the relationship becomes Customer.Orders 1:n Order.Customer. To control the naming of these navigators for the various relationship types, LLBLGen Pro defines a set of patterns which allow you, using macros, to define how the auto-created navigator names will look. For example, if you'd rather have Customer.OrderCollection, you can do so by changing the pattern from {$EndEntityName$P} to {$EndEntityName}Collection. The $P directive makes sure the name is pluralized, which is not what you want if you're going for <EntityName>Collection, hence it's removed.

    When working model first, it's a given that you'll create foreign key fields along the way as you define relationships. For example, you've defined two entities, Customer and Order, and they have their fields set up properly. Now you want to define a relationship between them. This automatically creates a foreign key field in the Order entity, which reflects the value of the PK field in Customer. (No worries if you hate foreign key fields in your classes: on NHibernate and EF these can be hidden in the generated code if you want.) A specific pattern is available for you to direct LLBLGen Pro how to name this foreign key field. For example, if all your entities have Id as PK field, you might want a different name than Id for the foreign key field. In our Customer - Order example, you might want CustomerId instead as the foreign key name in Order. The pattern for foreign key fields gives you that freedom.

    Abbreviations... make sense of OrdNr and friends

    I already described word breaks in the PasCal-casing paragraph and how they're used in the constructed name. Word breaks are used for another neat feature LLBLGen Pro has to offer: abbreviation support. Burt, your friendly DBA in the dungeons below the office, has a hate-hate relationship with his keyboard: typing is something he avoids like the plague. This has resulted in tables and fields whose names are very short, but also very unreadable. Example: our TBL_ORDER_LINES table has a lovely field called ORD_NR. What you would like to see in your fancy new OrderLine entity mapped onto this table is a field called OrderNumber, not a field called OrdNr. What you would also like is to not have to rename that field manually; there are better things to do with your time, after all. LLBLGen Pro has you covered. All it takes is defining some abbreviation / full-word pairs, and when reverse engineering model elements from tables/views, LLBLGen Pro takes care of the rest. For the ORD_NR field, you need two pairs: ORD as abbreviation with Order as full word, and NR as abbreviation with Number as full word. LLBLGen Pro will then convert every word fragment (found via the word breaks) which matches an abbreviation into the given full word. Abbreviations are case sensitive and can be found in the Project Settings: navigate to Conventions -> Element Name Construction -> Abbreviations.

    Automatic relational model element naming features

    Not everyone works database first: it may very well be the case that you start from scratch, or have to add additional tables to an existing database. For these situations it's key that you can control the created table names and table fields without any work: let the designer create these names based on the entity model you defined and a set of rules. LLBLGen Pro offers several features in this area, which are described in more detail below. These features are found in Project Settings: navigate to Conventions -> Model First Development.

    Underscores, welcome back!

    Not every database is case insensitive, and not every organization requires PasCal-cased table/field names; some demand all-lower or all-upper-case names with underscores at word breaks. Say you create an entity model with an entity called OrderLine. You work with Oracle, and your organization requires underscores at word breaks: a table created from OrderLine should be called ORDER_LINE. LLBLGen Pro allows you to do that: with a simple checkbox you can order LLBLGen Pro to insert an underscore at each word break for the type of database you're working with, case sensitive or case insensitive. Checking the checkbox Insert underscore at word break case insensitive dbs will let LLBLGen Pro create a table called Order_Line from the entity. Half-way there: there are still lower-case characters and you need all caps. No worries, see Casing directives below.

    Casing directives so everyone can sleep well at night

    For case sensitive databases and case insensitive databases there is one setting each which controls the casing of the name created from a model element (e.g. a table created from an entity definition using the auto-mapping feature). The settings can have the following values: AsProjectElement, AllUpperCase or AllLowerCase. AsProjectElement is the default, and it keeps the casing as-is. In our example we need all upper-case characters, so we select AllUpperCase for the case sensitive database setting. This produces the name ORDER_LINE.

    Sequence naming after a pattern

    Some databases support sequences, and using model-first development it's key to have sequences, when needed, created automatically and, if possible, with a name which shows where they're used. Say you have an entity Order and you want the PK values to be created by the database using a sequence. The database you're using supports sequences (e.g. Oracle), and as you want all numeric PK fields to be sequenced, you have enabled this via the setting Auto assign sequences to integer pks. When you use LLBLGen Pro's auto-map feature to create new tables and constraints from the model, it creates a new table, ORDER, based on the settings discussed above, with a PK field ID, and it also creates a sequence, SEQ_ORDER, which is auto-assigned to the ID field mapping. The name of the sequence is created using a pattern, defined in the Model First Development setting Sequence pattern, which uses plain text and macros like the other patterns previously discussed.

    Grouping and schemas

    When you start from scratch and you're working model first, the tables created by LLBLGen Pro will be in a catalog and/or schema created by LLBLGen Pro as well. If you use LLBLGen Pro's grouping feature, which allows you to group entities and other model elements into groups in the project (described in a future blog post), you might want to have that group name reflected in the schema name the targets of the model elements are in. Say you have a model with a group CRM and a group HRM, both with entities unique to these groups, e.g. Employee in HRM and Customer in CRM. When auto-mapping this model to create tables, you might want the table created for Employee to be in the HRM schema but the table created for Customer in the CRM schema. LLBLGen Pro will do just that when you set Set schema name after group name to true (the default). This gives you total control over what is placed where in the database from your model.

    But I want plural table names... and TBL_ prefixes!

    For now we follow best practices, which suggest singular table names and no prefixes/suffixes for names. Of course that won't keep everyone happy, so we're looking into making this configurable in a future version.

    Conclusion

    LLBLGen Pro offers a variety of options to let the modeling system do as much work for you as possible. Hopefully you enjoyed this little highlight post and it has given you new insights into the smaller features available to you in LLBLGen Pro, ones you might not have thought of in the first place. Enjoy!

    Read the article

  • SSAS deployment error: Internal error: Invalid enumeration value. Please call customer support! is not a valid value for this element.

    - by Kevin Shyr
The first search on this error yielded some blog posts saying to check the SQL Server version. They suggested that I couldn't deploy an SSAS project originally set for SSAS 2008 to SSAS 2008 R2, which didn't make sense to me, combined with the fact that the error message was telling me to call customer support. Why would I need to call customer support unless something catastrophic happened? Turns out that one of the files on the SQL server was corrupt. I could simply delete the database on the SSAS server and re-run deployment. Problem solved.

SSAS errors in Visual Studio:

Error 10: Internal error: Invalid enumeration value. Please call customer support! is not a valid value for this element.
Error 11: An error occurred while parsing the 'StorageMode' element at line 1, column 10523 ('http://schemas.microsoft.com/analysisservices/2003/engine' namespace) under Load/ObjectDefinition/Dimension/StorageMode.
Error 12: Errors in the metadata manager. An error occurred when instantiating a metadata object from the file, '\\?\E:\Program Files\Microsoft SQL Server\MSAS10_50.MSSQLSERVER\OLAP\Data\DWH Sales Facts.0.db\Competitor.48.dim.xml'.
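For reference, a minimal hedged sketch of dropping the corrupt database via an XMLA query before redeploying; the DatabaseID here is inferred from the file path in the error and should be verified in Management Studio first:

    <!-- Sketch: drop the corrupt SSAS database, then re-run deployment.
         The DatabaseID is an assumption based on the error's file path. -->
    <Delete xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
      <Object>
        <DatabaseID>DWH Sales Facts</DatabaseID>
      </Object>
    </Delete>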

    Read the article

  • Get More Value From Your Oracle Premier Support Investment

    - by Get Proactive Customer Adoption Team
The Return on Investment in Support Training

I’m a typical software user. I’ve been using spreadsheets almost daily for the past 10 years or so. I know how to enter simple formulas, format cells, import files, and I can sort and filter. Sometimes I even use a pivot table. I never attended training; I learnt everything I know on the fly. Sometimes it was intuitive and easy, other times I had to spend minutes and even hours searching for a solution. Yet when I see what some other people can do with their spreadsheets, I know I’m utilizing maybe 15% of the functionality. A pity; one day I really have to sign up for training. Why haven’t I done it yet? Ah, you know, I’m a busy person, I have work to do. And if I need to use a feature that I am unfamiliar with, I’ll spend time on it only when I really need it. Now wait. When I recall how much time I spent trying to figure out how things work compared to the time I spent doing productive work, I realize it was not insignificant. I’m unable to sum up all the time I spent ‘learning’ on the fly, but I’m sure it’s been days or even weeks. And after all this time, I’ve mastered 15% of its features. If only I had attended training years ago. That investment would have paid back 10 times!

Working with My Oracle Support is no different. Our customers typically use simple search, create service requests, and download patches. They think they know how to use My Oracle Support. And they’re right: they know something, but often they’re utilizing only a fragment of My Oracle Support’s potential. For the investment that has been made, using only a small subset of the capabilities offered in My Oracle Support leaves value on the table. There is much more available in My Oracle Support. Dozens of diagnostic tools and proactive health checks will keep verifying your Oracle environments against best practices that Oracle gathers every day thanks to our comprehensive knowledge management process. Automated patch recommendations will help prevent known issues, and upgrade planning and more are included in My Oracle Support. Why are you not utilizing all of these best practices, capabilities and tools? Is it because you don’t have time to invest 2-3 hours to learn about the features? Simply because you think you can learn on the fly, like I thought I could? Does learning on the fly how to properly use the Service Request escalation process when you already have a critical issue sound like a good idea?

My advice is: invest your time now to learn how My Oracle Support can help you prevent issues on your systems. Learn how to find answers faster and resolve problems more efficiently. Understand how to properly complete a service request. Invest in Support training, offered at no additional cost to Oracle Premier Support customers. It will pay back quicker than you think. It will bring you more value than you think. Discover your advantage with Oracle Premier Support's Proactive Portfolio.

    Read the article

  • Howto WCF Service HTTPS Binding and Endpoint Configuration in IIS with Load Balancer?

    - by Mike G
We have a WCF service that is being hosted on a set of 12 machines, with a load balancer acting as a gateway to those machines. The site is set up with SSL, as in a user accesses it using an https URL. I know this much: the URL that addresses the site is https, but none of the servers has an https binding or is set up to require SSL. This leads me to believe that the load balancer handles the https and the connections from the balancer to the servers are unencrypted (this takes place behind the firewall, so no biggie there). The problem we're having is that when a Silverlight client tries to access a WCF service it gets a "Not Found" error. I've set up a test site along with our developer machines and have made sure that the bindings and endpoints in the web.config work with the client; it's in the production environment that we get this error. Is there anything wrong with the following web.config? Should we be setting up how https is handled in a different manner? We're at a loss currently, since I've tried every programmatic solution with endpoints and bindings, and none of the solutions I have found deal with a load balancer in the manner we're dealing with one.

Web.config service model info:

    <system.serviceModel>
      <behaviors>
        <serviceBehaviors>
          <behavior name="TradePMR.OMS.Framework.Services.CRM.CRMServiceBehavior">
            <serviceMetadata httpsGetEnabled="true" />
            <serviceDebug includeExceptionDetailInFaults="false" />
          </behavior>
          <behavior name="TradePMR.OMS.Framework.Services.AccountAggregation.AccountAggregationBehavior">
            <serviceMetadata httpsGetEnabled="true" />
            <serviceDebug includeExceptionDetailInFaults="false" />
          </behavior>
        </serviceBehaviors>
      </behaviors>
      <bindings>
        <customBinding>
          <binding name="SecureCRMCustomBinding">
            <binaryMessageEncoding />
            <httpsTransport />
          </binding>
          <binding name="SecureAACustomBinding">
            <binaryMessageEncoding />
            <httpsTransport />
          </binding>
        </customBinding>
        <mexHttpsBinding>
          <binding name="SecureMex" />
        </mexHttpsBinding>
      </bindings>
      <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
      <!--Defines the services to be used in the application-->
      <services>
        <service behaviorConfiguration="TradePMR.OMS.Framework.Services.CRM.CRMServiceBehavior"
                 name="TradePMR.OMS.Framework.Services.CRM.CRMService">
          <endpoint address="" binding="customBinding" bindingConfiguration="SecureCRMCustomBinding"
                    contract="TradePMR.OMS.Framework.Services.CRM.CRMService" name="SecureCRMEndpoint" />
          <!--This is required in order to be able to use the "Update Service Reference" in the Silverlight application-->
          <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
        </service>
        <service behaviorConfiguration="TradePMR.OMS.Framework.Services.AccountAggregation.AccountAggregationBehavior"
                 name="TradePMR.OMS.Framework.Services.AccountAggregation.AccountAggregation">
          <endpoint address="" binding="customBinding" bindingConfiguration="SecureAACustomBinding"
                    contract="TradePMR.OMS.Framework.Services.AccountAggregation.AccountAggregation" name="SecureAAEndpoint" />
          <!--This is required in order to be able to use the "Update Service Reference" in the Silverlight application-->
          <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
        </service>
      </services>
    </system.serviceModel>

The ServiceReferences.ClientConfig looks like this:

    <configuration>
      <system.serviceModel>
        <bindings>
          <customBinding>
            <binding name="StandardAAEndpoint">
              <binaryMessageEncoding />
              <httpTransport maxReceivedMessageSize="2147483647" maxBufferSize="2147483647" />
            </binding>
            <binding name="SecureAAEndpoint">
              <binaryMessageEncoding />
              <httpsTransport maxReceivedMessageSize="2147483647" maxBufferSize="2147483647" />
            </binding>
            <binding name="StandardCRMEndpoint">
              <binaryMessageEncoding />
              <httpTransport maxReceivedMessageSize="2147483647" maxBufferSize="2147483647" />
            </binding>
            <binding name="SecureCRMEndpoint">
              <binaryMessageEncoding />
              <httpsTransport maxReceivedMessageSize="2147483647" maxBufferSize="2147483647" />
            </binding>
          </customBinding>
        </bindings>
        <client>
          <endpoint address="https://Service2.svc" binding="customBinding" bindingConfiguration="SecureAAEndpoint"
                    contract="AccountAggregationService.AccountAggregation" name="SecureAAEndpoint" />
          <endpoint address="https://Service1.svc" binding="customBinding" bindingConfiguration="SecureCRMEndpoint"
                    contract="CRMService.CRMService" name="SecureCRMEndpoint" />
        </client>
      </system.serviceModel>
    </configuration>

(The addresses are of no consequence since they are dynamically built so that they point to a dev's machine or to the production server.)
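No answer is included in this excerpt, but one hedged observation: if the load balancer terminates SSL, the requests reaching IIS are plain HTTP, so a server-side binding using httpsTransport will never match, while the Silverlight client still needs https for the public URL. A sketch of the server-side bindings under that assumption (untested against this exact environment):

    <!-- Sketch, assuming SSL terminates at the load balancer: the servers see
         plain HTTP, so the server-side custom bindings use httpTransport.
         The client config keeps httpsTransport for the public https URL. -->
    <customBinding>
      <binding name="SecureCRMCustomBinding">
        <binaryMessageEncoding />
        <httpTransport />
      </binding>
      <binding name="SecureAACustomBinding">
        <binaryMessageEncoding />
        <httpTransport />
      </binding>
    </customBinding>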

    Read the article

  • Oracle Data Mining a Star Schema: Telco Churn Case Study

    - by charlie.berger
There is a complete and detailed Telco Churn case study "How to" Blog Series just posted by Ari Mozes, ODM Dev. Manager. In it, Ari provides detailed guidance on how to leverage various strengths of Oracle Data Mining, including the ability to:

- mine star schemas and join tables and views together to obtain a complete 360 degree view of a customer
- combine transactional data, e.g. call detail record (CDR) data, etc.
- define complex data transformation, model build and model deploy analytical methodologies inside the Database

His blog is posted in a multi-part series. Below are some opening excerpts for the first 3 blog entries. This is an excellent resource for any novice to skilled data miner who wants to gain competitive advantage by mining their data inside the Oracle Database. Many thanks Ari!

Mining a Star Schema: Telco Churn Case Study (1 of 3)

One of the strengths of Oracle Data Mining is the ability to mine star schemas with minimal effort. Star schemas are commonly used in relational databases, and they often contain rich data with interesting patterns. While dimension tables may contain interesting demographics, fact tables will often contain user behavior, such as phone usage or purchase patterns. Both of these aspects - demographics and usage patterns - can provide insight into behavior. Churn is a critical problem in the telecommunications industry, and companies go to great lengths to reduce the churn of their customer base. One case study [1] describes a telecommunications scenario involving understanding, and identification of, churn, where the underlying data is present in a star schema. That case study is a good example for demonstrating just how natural it is for Oracle Data Mining to analyze a star schema, so it will be used as the basis for this series of posts......

Mining a Star Schema: Telco Churn Case Study (2 of 3)

This post will follow the transformation steps as described in the case study, but will use Oracle SQL as the means for preparing data. Please see the previous post for background material, including links to the case study and to scripts that can be used to replicate the stages in these posts.

1) Handling missing values for call data records

The CDR_T table records the number of phone minutes used by a customer per month and per call type (tariff). For example, the table may contain one record corresponding to the number of peak (call type) minutes in January for a specific customer, and another record associated with international calls in March for the same customer. This table is likely to be fairly dense (most type-month combinations for a given customer will be present) due to the coarse level of aggregation, but there may be some missing values. Missing entries may occur for a number of reasons: the customer made no calls of a particular type in a particular month, the customer switched providers during the timeframe, or perhaps there is a data entry problem. In the first situation, the correct interpretation of a missing entry would be to assume that the number of minutes for the type-month combination is zero. In the other situations, it is not appropriate to assume zero, but rather derive some representative value to replace the missing entries. The referenced case study takes the latter approach.
The data is segmented by customer and call type, and within a given customer-call type combination, an average number of minutes is computed and used as a replacement value. In SQL, we need to generate additional rows for the missing entries and populate those rows with appropriate values. To generate the missing rows, Oracle's partition outer join feature is a perfect fit.

    select cust_id, cdre.tariff, cdre.month, mins
    from cdr_t cdr partition by (cust_id)
         right outer join
         (select distinct tariff, month from cdr_t) cdre
         on (cdr.month = cdre.month and cdr.tariff = cdre.tariff);

.......

Mining a Star Schema: Telco Churn Case Study (3 of 3)

Now that the "difficult" work is complete - preparing the data - we can move to building a predictive model to help identify and understand churn. The case study suggests that separate models be built for different customer segments (high, medium, low, and very low value customer groups). To reduce the data to a single segment, a filter can be applied:

    create or replace view churn_data_high as
    select * from churn_prep where value_band = 'HIGH';

It is simple to take a quick look at the predictive aspects of the data on a univariate basis. While this does not capture the more complex multi-variate effects as would occur with the full-blown data mining algorithms, it can give a quick feel as to the predictive aspects of the data as well as validate the data preparation steps. Oracle Data Mining includes a predictive analytics package which enables quick analysis.

    begin
      dbms_predictive_analytics.explain(
        'churn_data_high', 'churn_m6', 'expl_churn_tab');
    end;
    /

    select * from expl_churn_tab where rank <= 5 order by rank;

    ATTRIBUTE_NAME       ATTRIBUTE_SUBNAME EXPLANATORY_VALUE RANK
    -------------------- ----------------- ----------------- ----
    LOS_BAND                                      .069167052    1
    MINS_PER_TARIFF_MON  PEAK-5                   .034881648    2
    REV_PER_MON          REV-5                    .034527798    3
    DROPPED_CALLS                                 .028110322    4
    MINS_PER_TARIFF_MON  PEAK-4                   .024698149    5

From the above results, it is clear that some predictors do contain information to help identify churn (explanatory value > 0). The strongest uni-variate predictor of churn appears to be the customer's (binned) length of service. The second strongest churn indicator appears to be the number of peak minutes used in the most recent month. The subname column contains the interior piece of the DM_NESTED_NUMERICALS column described in the previous post. By using the object relational approach, many related predictors are included within a single top-level column.

.....

NOTE: These are just EXCERPTS. Click here to start reading the Oracle Data Mining a Star Schema: Telco Churn Case Study from the beginning.
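The excerpt stops before the replacement step itself, so here is a hedged sketch of how the per-customer, per-tariff average could fill the generated nulls; cdr_generated stands for the result of the partition outer join above and is an assumed name:

    -- Sketch: replace missing minute counts with the average for the same
    -- customer/tariff segment, as the case study describes. NVL plus an
    -- analytic AVG keeps it to a single pass over the data.
    select cust_id, tariff, month,
           nvl(mins, avg(mins) over (partition by cust_id, tariff)) as mins
    from cdr_generated;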

    Read the article

  • How to make custom WCF error handler return JSON response with non-OK http code?

    - by John
I'm implementing a RESTful web service using WCF and the WebHttpBinding. Currently I'm working on the error handling logic, implementing a custom error handler (IErrorHandler); the aim is to have it catch any uncaught exceptions thrown by operations and then return a JSON error object (including, say, an error code and error message - e.g. { "errorCode": 123, "errorMessage": "bla" }) back to the browser user along with an HTTP code such as BadRequest or InternalServerError (anything other than 'OK', really). Here is the code I am using inside the ProvideFault method of my error handler:

    fault = Message.CreateMessage(version, "", errorObject, new DataContractJsonSerializer(typeof(ErrorMessage)));
    var wbf = new WebBodyFormatMessageProperty(WebContentFormat.Json);
    fault.Properties.Add(WebBodyFormatMessageProperty.Name, wbf);
    var rmp = new HttpResponseMessageProperty();
    rmp.StatusCode = System.Net.HttpStatusCode.InternalServerError;
    rmp.Headers.Add(HttpRequestHeader.ContentType, "application/json");
    fault.Properties.Add(HttpResponseMessageProperty.Name, rmp);

-- This returns with Content-Type: application/json, however the status code is 'OK' instead of 'InternalServerError'.

    fault = Message.CreateMessage(version, "", errorObject, new DataContractJsonSerializer(typeof(ErrorMessage)));
    var wbf = new WebBodyFormatMessageProperty(WebContentFormat.Json);
    fault.Properties.Add(WebBodyFormatMessageProperty.Name, wbf);
    var rmp = new HttpResponseMessageProperty();
    rmp.StatusCode = System.Net.HttpStatusCode.InternalServerError;
    //rmp.Headers.Add(HttpRequestHeader.ContentType, "application/json");
    fault.Properties.Add(HttpResponseMessageProperty.Name, rmp);

-- This returns with the correct status code, however the content-type is now XML.

    fault = Message.CreateMessage(version, "", errorObject, new DataContractJsonSerializer(typeof(ErrorMessage)));
    var wbf = new WebBodyFormatMessageProperty(WebContentFormat.Json);
    fault.Properties.Add(WebBodyFormatMessageProperty.Name, wbf);
    var response = WebOperationContext.Current.OutgoingResponse;
    response.ContentType = "application/json";
    response.StatusCode = HttpStatusCode.InternalServerError;

-- This returns with the correct status code and the correct content-type! The problem is that the http body now has the text 'Failed to load source for: http://localhost:7000/bla..' instead of the actual JSON data. Any ideas? I'm considering using the last approach and just sticking the JSON in the HTTP StatusMessage header field instead of in the body, but this doesn't seem quite as nice?
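No accepted answer appears in this excerpt, but as a hedged sketch (not a verified fix): the first attempt sets the Content-Type header via HttpRequestHeader.ContentType; combining the JSON body-format property with an HttpResponseMessageProperty whose Content-Type is set through HttpResponseHeader plausibly yields both the JSON body and the non-OK status code. The ErrorMessage DTO and its property names are assumptions:

    // Hedged sketch of IErrorHandler.ProvideFault combining the attempts above.
    // Namespaces assumed: System.Net, System.Runtime.Serialization.Json,
    // System.ServiceModel.Channels, System.ServiceModel.Web.
    public void ProvideFault(Exception error, MessageVersion version, ref Message fault)
    {
        // DTO from the question; the property names here are assumptions.
        var errorObject = new ErrorMessage { Code = 123, Text = error.Message };
        fault = Message.CreateMessage(version, "", errorObject,
            new DataContractJsonSerializer(typeof(ErrorMessage)));

        // Write the body as JSON instead of XML.
        fault.Properties.Add(WebBodyFormatMessageProperty.Name,
            new WebBodyFormatMessageProperty(WebContentFormat.Json));

        // Non-OK status code, plus the *response* Content-Type header;
        // note HttpResponseHeader, not HttpRequestHeader as in the first attempt.
        var rmp = new HttpResponseMessageProperty
        {
            StatusCode = HttpStatusCode.InternalServerError
        };
        rmp.Headers[HttpResponseHeader.ContentType] = "application/json";
        fault.Properties.Add(HttpResponseMessageProperty.Name, rmp);
    }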

    Read the article

  • IntentService android download and return file to Activity

    - by Andrew G
I have a fairly tricky situation that I'm trying to determine the best design for. The basics are this: I'm designing a messaging system with a similar interface to email. When a user clicks a message that has an attachment, an activity is spawned that shows the text of that message along with a paper clip signaling that there is an additional attachment. At this point, I begin preloading the attachment so that when the user clicks on it, it loads more quickly. Currently, when the user clicks the attachment, it prompts with a loading dialog until the download is complete, at which point it loads a separate attachment viewer activity, passing in the bmp byte array. I don't ever want to save attachments to persistent storage. The difficulty I have is in supporting rotation as well as home button presses etc. The download is currently done with a thread and handler setup. Instead of this, I'd like the flow to be the following: the user loads the message as before, and preloading of the attachment begins as before (invisible to the user). When the user clicks on the attachment link, the attachment viewer activity is spawned right away. If the download is done, the image is displayed. If not, a dialog is shown in THIS activity until it is done and can be displayed. Note that ideally the download never restarts, or else I've wasted cycles on the preload. Obviously I need some persistent background process that is able to keep downloading and is able to call back to arbitrarily bound Activities. It seems like the IntentService almost fits my needs, as it does its work in a background thread and has the Service (non-UI) lifecycle. However, will it work for my other needs? I notice that common implementations for what I want to do get a Messenger from the caller Activity so that a Message object can be sent back to a Handler in the caller's thread. This is all well and good, but what happens in my case when the caller Activity is stopped or destroyed and the currently active Activity (the attachment viewer) is showing? Is there some way to dynamically bind a new Activity to a running IntentService so that I can send a Message back to the new Activity? The other question is on the Message object. Can I send arbitrarily large data back in this package? For instance, rather than send back that "the file was downloaded", I need to send back the byte array of the downloaded file itself, since I never want to write it to disk (and yes, this needs to be the case). Any advice on achieving the behavior I want is greatly appreciated. I've not been working with Android for that long, and I often get confused with how to best handle asynchronous processes over the course of the Activity lifecycle, especially when it comes to orientation changes and home button presses...
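No answer is shown in this excerpt, but one hedged sketch of the re-binding part: a stock IntentService returns null from onBind, so a plain Service (or an IntentService subclass overriding onBind) would be needed; each newly created Activity then re-registers its own Messenger in onStart. The class and message names below (DownloadService, MSG_REGISTER_CLIENT, MSG_ATTACHMENT_READY) are illustrative assumptions, not Android APIs.

    import android.app.Activity;
    import android.content.ComponentName;
    import android.content.Context;
    import android.content.Intent;
    import android.content.ServiceConnection;
    import android.os.Handler;
    import android.os.IBinder;
    import android.os.Message;
    import android.os.Messenger;
    import android.os.RemoteException;

    public class AttachmentViewerActivity extends Activity {

        private Messenger serviceMessenger;

        // Receives the downloaded bytes from the service. Because the service
        // runs in the same process, msg.obj can carry the byte[] directly.
        private final Messenger clientMessenger = new Messenger(new Handler() {
            @Override
            public void handleMessage(Message msg) {
                if (msg.what == DownloadService.MSG_ATTACHMENT_READY) {
                    byte[] image = (byte[]) msg.obj;
                    // dismiss the loading dialog and render the image here
                }
            }
        });

        private final ServiceConnection connection = new ServiceConnection() {
            @Override
            public void onServiceConnected(ComponentName name, IBinder binder) {
                serviceMessenger = new Messenger(binder);
                // Re-register this (possibly re-created) Activity with the
                // already running service, so the callback survives rotation.
                Message register = Message.obtain(null, DownloadService.MSG_REGISTER_CLIENT);
                register.replyTo = clientMessenger;
                try {
                    serviceMessenger.send(register);
                } catch (RemoteException ignored) {
                    // same-process service; a failure here is unexpected
                }
            }

            @Override
            public void onServiceDisconnected(ComponentName name) {
                serviceMessenger = null;
            }
        };

        @Override
        protected void onStart() {
            super.onStart();
            bindService(new Intent(this, DownloadService.class), connection, Context.BIND_AUTO_CREATE);
        }

        @Override
        protected void onStop() {
            super.onStop();
            unbindService(connection);
        }
    }

On the size question: within a single process the Message never crosses the binder driver, so passing the byte array via msg.obj is plausible; across processes the roughly 1MB binder transaction limit would rule that out (a hedged understanding, worth verifying).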

    Read the article

  • Windows Azure PowerShell for Node.js

    - by shiju
The Windows Azure PowerShell for Node.js is a command-line tool that allows Node developers to build and deploy Node.js apps in Windows Azure using Windows PowerShell cmdlets. Using Windows Azure PowerShell for Node.js, you can develop, test, deploy and manage Node based hosted services in Windows Azure. To get PowerShell for Node.js, click All Programs > Windows Azure SDK Node.js and run Windows Azure PowerShell for Node.js as Administrator. The following are a few PowerShell cmdlets that let you work with Node.js apps in Windows Azure.

Create New Hosted Service

New-AzureService <HostedServiceName>

The below cmdlet will create a Windows Azure hosted service named NodeOnAzure in the folder C:\nodejs; this will also create the ServiceConfiguration.Cloud.cscfg, ServiceConfiguration.Local.cscfg, ServiceDefinition.csdef and deploymentSettings.json files for the hosted service.

PS C:\nodejs> New-AzureService NodeOnAzure

The below picture shows the files after creating the hosted service.

Create Web Role

Add-AzureNodeWebRole <RoleName>

The following cmdlet will create a web role named MyNodeApp along with a web.config file.

PS C:\nodejs\NodeOnAzure> Add-AzureNodeWebRole MyNodeApp

The below picture shows the files after creating the web role app.

Install Node Module

npm install <NodeModule>

The following command will install the Express Node module into your web role app.

PS C:\nodejs\NodeOnAzure\MyNodeApp> npm install Express

Run Windows Azure Apps Locally in the Emulator

Start-AzureEmulator -launch

The following cmdlet will create a local package and run the Windows Azure app locally in the emulator.

PS C:\nodejs\NodeOnAzure\MyNodeApp> Start-AzureEmulator -launch

Stop Windows Azure Emulator

Stop-AzureEmulator

The following cmdlet will stop your Windows Azure app in the emulator.

PS C:\nodejs\NodeOnAzure\MyNodeApp> Stop-AzureEmulator

Download Windows Azure Publishing Settings

Get-AzurePublishSettings

The following cmdlet will redirect to the Windows Azure portal where we can download the Windows Azure publish settings.

PS C:\nodejs\NodeOnAzure\MyNodeApp> Get-AzurePublishSettings

Import Windows Azure Publishing Settings

Import-AzurePublishSettings <Location of .publishSettings file>

The following cmdlet will import the publish settings file from the location c:\nodejs.

PS C:\nodejs\NodeOnAzure\MyNodeApp> Import-AzurePublishSettings c:\nodejs\shijuvar.publishSettings

Publish Apps to Windows Azure

Publish-AzureService -name <Name> -location <Location of data centre>

The following cmdlet will publish the app to Windows Azure with the name "NodeOnAzure" in the location Southeast Asia. Please keep in mind that the service name should be unique.

PS C:\nodejs\NodeOnAzure\MyNodeApp> Publish-AzureService -name NodeOnAzure -location "Southeast Asia" -launch

Stop Windows Azure Service

Stop-AzureService

The following cmdlet will stop the service which you have deployed previously.

PS C:\nodejs\NodeOnAzure\MyNodeApp> Stop-AzureService

Remove Windows Azure Service

Remove-AzureService

The following cmdlet will remove your service from Windows Azure.
PS C:\nodejs\NodeOnAzure\MyNodeApp> Remove-AzureService

Quick Summary for PowerShell cmdlets

- Create a new hosted service: New-AzureService <HostedServiceName>
- Create a web role: Add-AzureNodeWebRole <RoleName>
- Install a Node module: npm install <NodeModule>
- Run Windows Azure apps locally in the emulator: Start-AzureEmulator -launch
- Stop the Windows Azure emulator: Stop-AzureEmulator
- Download Windows Azure publishing settings: Get-AzurePublishSettings
- Import Windows Azure publishing settings: Import-AzurePublishSettings <Location of .publishSettings file>
- Publish apps to Windows Azure: Publish-AzureService -name <Name> -location <Location of data centre>
- Stop a Windows Azure service: Stop-AzureService
- Remove a Windows Azure service: Remove-AzureService
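Putting the cmdlets together, a hedged end-to-end sketch of the workflow described above; the service name, module and paths are the post's own examples:

    # Sketch: scaffold, test locally, then publish a Node.js hosted service.
    New-AzureService NodeOnAzure                 # creates the service scaffolding
    Add-AzureNodeWebRole MyNodeApp               # adds a web role plus web.config
    npm install express                          # pulls in the Express module
    Start-AzureEmulator -launch                  # smoke-test in the local emulator
    Stop-AzureEmulator
    Import-AzurePublishSettings c:\nodejs\shijuvar.publishSettings
    Publish-AzureService -name NodeOnAzure -location "Southeast Asia" -launch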

    Read the article

  • Now Available &ndash; Windows Azure SDK 1.6

    - by Shaun
Microsoft has just announced the Windows Azure SDK 1.6 and the Windows Azure Tools for Visual Studio 1.6. People can now download the latest product through the WebPI. After you have downloaded and installed the SDK, you will find that SDK 1.6 can live side by side with SDK 1.5, which means you can still use the 1.5 assemblies, but the Visual Studio Tools are upgraded to 1.6. Different from the previous SDK, this version includes 4 components: Windows Azure Authoring Tools, Windows Azure Emulators, Windows Azure Libraries for .NET 1.6 and the Windows Azure Tools for Microsoft Visual Studio 2010. There are some significant upgrades in this version:

- Publishing enhancement: more easily connect to Windows Azure when publishing your application by retrieving a publish settings file. It lets you configure deployment settings without going back to the developer portal.
- Multi-profiles: the publish settings, cloud configuration files, etc. are stored in one or more MSBuild files, making it much easier to switch settings between various build environments.
- MSBuild command-line build support.
- In-place upgrade support.

Publishing Enhancement

So let's have a look at the new publishing features. Just create a new Windows Azure project in Visual Studio 2010 with an MVC 3 web role, right-click the Windows Azure project node in the solution explorer and select Publish; we will find the new publish dialog. In this version the first thing we need to do is connect to our Windows Azure subscription. Click the "Sign in to download credentials" link and we will be navigated to the login page to provide our Live ID. The Windows Azure Tool will generate a certificate and deploy it to the subscriptions we have as the Management Certificate, and we then download a PUBLISHSETTINGS file, which contains the credentials and subscription information. The VS Tool will use this certificate to connect to the subscription in the next step.

In the next step, back in Visual Studio (the publish dialog should still be open), click the Import button and select the PUBLISHSETTINGS file we just downloaded. All our subscriptions will then be shown in the dropdown list. Select the subscription the application should be published to and press the Next button; we can then select the hosted service, environment, build configuration and service configuration in the dialog. In this version we can create a new hosted service directly here rather than going back to the developer portal: just select the <Create New …> item in the hosted service list. All we need to do is provide the hosted service name and the location. Once OK is clicked, the hosted service will be established after several seconds. If we go to the developer portal we will find the new hosted service in the subscription.

a) Currently we cannot select the Affinity Group when creating a new hosted service through the Visual Studio publish dialog.
b) Although we can specify the hosted service name and DNS prefix separately through the developer portal, we cannot do so from the VS Tool, which means the DNS prefix will be the same as the hosted service name. For example, if we specify our hosted service name as "Sdk16Demo", the public URL will be http://sdk16demo.cloudapp.net/.
After creating a new hosted service we can select the cloud environment (production or staging), the build configuration (release or debug), and the service configuration (cloud or local), and we can enable Remote Desktop by checking the related checkbox as well. One thing to note is that in this version, when we configure Remote Desktop we don't need to specify a certificate, because Visual Studio will generate a new one for us by default. We can still specify an existing certificate for RDC by clicking the "More Options" button. The Visual Studio Tool creates a separate certificate for the Remote Desktop connection; it will NOT use the certificate that manages the subscription. We can also open the "Advanced Settings" page to specify the deployment label, storage account, IntelliTrace and .NET profiling information, etc.

Press the Next button; the dialog will display all the settings we just specified and will save them as a new profile. The last step is to click the Publish button. Since we enabled the Remote Desktop feature, the first step of publishing is uploading the certificate. It then verifies the storage account we specified, uploads the package, and finally creates the website in Windows Azure.

Multi-Profiles

After publishing, if we go back to Visual Studio we can find an AZUREPUBXML file under the Profiles folder in the Azure project. It includes all the settings we specified before. If we publish this project again, we can just use the current settings (hosted service, environment, RDC, etc.) from this profile without entering them again, which is very useful when we have more than one set of deployment settings. For example, we could have one AZUREPUBXML profile for deploying to a testing environment (debug build, fewer roles, with RDC and IntelliTrace) and one for production (release build, more roles, but without IntelliTrace).

In-Place Upgrade Support

Let's change some code in the MVC pages and click the Publish menu from the Azure project node. No need to specify any settings; we can reuse the previous settings by loading the Azure profile file (AZUREPUBXML). After clicking the Publish button, the VS Tool brings up a dialog indicating that there's already a deployment in the hosted service environment, asking whether to REPLACE it or not. Notice that in this version the dialog says "replace" rather than "delete", which means that by default the VS Tool performs an In-Place Upgrade when we deploy to a hosted service that already has a deployment. After clicking Yes, the VS Tool uploads the package and performs the In-Place Upgrade. If we go back to the developer portal we can see that the status of the hosted service has turned to "Updating…". In the previous SDK, it would instead delete the whole deployment and publish a new one.

Summary

When Microsoft announced the feature that allows changing the VM size via In-Place Upgrade, they also mentioned that in the next few versions the user experience of publishing Azure applications would be improved. The target was to accomplish the whole publish experience in Visual Studio, with no need to touch the developer portal any more. In SDK 1.6's new publish dialog, a developer can do the whole process, including creating the hosted service and specifying the environment, configuration, remote desktop, etc., without going back to the developer portal.
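One item from the feature list above that the walkthrough doesn't cover is the MSBuild command-line build support. As a hedged sketch (the target and property names are assumptions to verify against the SDK 1.6 documentation), producing the deployment package from the command line might look like:

    rem Sketch: build the .cspkg/.cscfg from a Windows Azure project (.ccproj)
    rem without opening Visual Studio. Names here are assumptions.
    msbuild WindowsAzureProject1.ccproj /t:Publish /p:Configuration=Release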
Hope this helps,
Shaun

All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Windows Azure Evolution &ndash; TFS Integration (WAWS Part 2)

    - by Shaun
So this is the fourth blog post about the new features of Windows Azure and the second part on Windows Azure Web Sites. But it is not just focused on WAWS, since the feature I'm going to introduce is available in both Windows Azure Web Sites and Windows Azure Cloud Services (a.k.a. hosted services). In the previous post I talked about Windows Azure Web Sites and how to use its gallery to build a WordPress personal blog without coding. Besides the gallery, we can create an empty web site and upload our website through various approaches. And one of the highlighted features here is that we can integrate our web site with a source control service, such as TFS or Git, so that it is deployed automatically once a new commit or build is available.

Create New Empty Web Site

In the developer portal, when creating a new web site we can select the QUICK CREATE item. This creates an empty web site with only one shared instance and without any database associated. Let's specify the URL, region and subscription and click OK. After a few seconds our website will be ready, and we can click the BROWSE button to open this empty website. As you can see, there is a welcome page available in my website even though I didn't upload or deploy anything. This means the website will be charged even before anything is deployed, similar to a cloud service (hosted service), because once we create a website the Windows Azure platform arranges a hosting process (w3wp.exe) on the group of virtual machines.

Create Project in TFS Preview Service and Set Up the Link

Currently Windows Azure Web Sites can integrate with TFS and Git as deployment sources, and for TFS it only supports the Microsoft TFS Preview Service for now. I will not go deep into how to use the TFS Preview Service in this post, but once we click into the website we just created and then click "Set up TFS publishing", there will be a dialog helping us to connect to this service. If you don't have an account you can click the link shown below to request one. Assuming we already have a TFS service account, we first need to create a new project. Go to your TFS service website and create a new project, providing the project name, description and process template. Then, back in the developer portal, click the "Set up TFS publishing" link. In the pop-up window, provide the TFS service URL and click the "Authorize now" link. Click the "Accept" button to allow Windows Azure to connect to the TFS service. It then returns to the developer portal and lists all the projects in the account; just select the one we created and click OK. The web site is now linked to the specified TFS project, and finally it shows something like the figure below. This means the web site has been linked to TFS successfully.

Work with TFS Preview Service in VS2010

In the figure above there are some links guiding us through connecting to the TFS service from Visual Studio 2010 and 2012 RC. If you are using Visual Studio 2012 RC you don't need any extension, but if you are using Visual Studio 2010 you must have SP1 and KB2581206 installed. To connect to the TFS service, just open Visual Studio, add a new TFS server in Team Explorer, and paste the URL of the TFS service from the developer portal. Then select the project we just created and it will be listed in Team Explorer. Now let's start to build our website.
Since the website we are going to build will be deployed to WAWS, it's NOT a cloud service, NOT a web role. So in this case we need to create a normal ASP.NET web application, for example an ASP.NET MVC 3 web application. Next, right-click the solution, select "Add Solution to Source Control" and pick the project we just created, then check the code in. Once the check-in finishes, we can see that there is a build running on the TFS server, and if we go back to the developer portal, we will see a deployment running on our web site's deployment page. In fact, once we linked our web site to TFS, a new build definition was created in our TFS project. It is triggered by each check-in and deploys to the linked web site automatically, so once our code has been compiled it is published to our web site from the TFS server. When the build and deployment finish, it shows as active on the developer portal. Now we can see the web site that was created in Visual Studio and deployed by TFS.

Continuous Deployment through VS and TFS

A big benefit of TFS publishing is continuous deployment. If I change some code in Visual Studio, for example update some text on the home page, and check in my changes, a new build is triggered and deployed to my WAWS automatically. And even more: if we want to roll back to a previous version, we can just select an existing deployment listed in the portal and click REDEPLOY at the bottom.

Q&A: Can a Web Site use Storage and work with a Worker Role?

Stacy asked a question in my previous post: "can a web site use Windows Azure Storage and furthermore work with a worker role?" Since the web site is deployed on Windows Azure virtual machines in the data center, it must be able to use all Windows Azure features such as storage, SQL databases, CDN, etc. But since with a web site we normally have a standard ASP.NET web application, PHP website or Node.js app, the Windows Azure SDK is not referenced by default. We can add it ourselves, though. In our sample project, right-click the MVC project and click "Manage NuGet Packages". In the dialog, search for Windows Azure packages and select "Windows Azure Storage" to install. Then we will have the assemblies to access Windows Azure storage: tables, queues and blobs. Since I have a storage account already, let's have a quick demo, just listing all blobs in a container. The code would be like this.
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web;
    using System.Web.Mvc;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    namespace WAASTFSDemo.Controllers
    {
        public class HomeController : Controller
        {
            public ActionResult Index()
            {
                ViewBag.Message = "Welcome to Windows Azure!";

                // Connect to the storage account and list all blobs in the
                // "shared" container, exposing their URIs to the view.
                var credentials = new StorageCredentialsAccountAndKey("[STORAGE_ACCOUNT]", "[STORAGE_KEY]");
                var account = new CloudStorageAccount(credentials, false);
                var client = account.CreateCloudBlobClient();
                var container = client.GetContainerReference("shared");
                ViewBag.Blobs = container.ListBlobs().Select(b => b.Uri.AbsoluteUri);

                return View();
            }

            public ActionResult About()
            {
                return View();
            }
        }
    }

    @{
        ViewBag.Title = "Home Page";
    }

    <h2>@ViewBag.Message</h2>
    <p>
        To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" title="ASP.NET MVC Website">http://asp.net/mvc</a>.
    </p>
    <div>
        <ul>
        @foreach (var blob in ViewBag.Blobs)
        {
            <li>@blob</li>
        }
        </ul>
    </div>

And then just check in the code; it will be deployed to my web site. Finally we can see the blobs in my storage.

This is just an example, but it proves that web sites can connect to storage: tables, blobs and queues as well. So the answer to Stacy should be "yes": the web site can use queue storage to work with a worker role.

Summary

In this post I demonstrated how to integrate with TFS from Windows Azure Web Sites. You can see our website can be built, uploaded and deployed automatically by the TFS service; all we need to do is provide the TFS name and select the project. Not only Windows Azure Web Sites: with this upgrade, Windows Azure Cloud Services (hosted services) can be published through TFS as well, in a very similar fashion. Currently only the Microsoft TFS Preview Service can be integrated with Windows Azure, but I think in the future we will be able to link the TFS in our enterprise, and some 3rd-party TFS hosts such as CodePlex, to Windows Azure.

Hope this helps,
Shaun

All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Windows Azure Learning Plan - Application Fabric

    - by BuckWoody
This is one in a series of posts on a Windows Azure Learning Plan. You can find the main post here. This one deals with the Application Fabric for Windows Azure, which serves three main purposes: Access Control, Caching, and a Service Bus.

Overview and Training
Overview and general information about the Azure Application Fabric - what it is, how it works, and where you can learn more.
- General Introduction and Overview: http://msdn.microsoft.com/en-us/library/ee922714.aspx
- Access Control Service Overview: http://msdn.microsoft.com/en-us/magazine/gg490345.aspx
- Microsoft Documentation: http://msdn.microsoft.com/en-gb/windowsazure/netservices.aspx

Learning and Examples
Sources for online and other Azure Application Fabric training.
- Application Fabric SDK: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=39856a03-1490-4283-908f-c8bf0bfad8a5&displaylang=en
- Application Fabric Caching Service Primer: http://blogs.msdn.com/b/appfabriccat/archive/2010/11/29/azure-appfabric-caching-service-soup-to-nuts-primer.aspx?wa=wsignin1.0
- Hands-On Lab: Building Windows Azure Applications with the Caching Service: http://www.wadewegner.com/2010/11/hands-on-lab-building-windows-azure-applications-with-the-caching-service/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+WadeWegner+%28Wade+Wegner+-+Technical%29

Architecture
Azure Application Fabric internals and architectures for scale-out and other use cases.
- Azure Application Fabric Architecture Guide: http://blogs.msdn.com/b/yasserabdelkader/archive/2010/09/12/release-of-windows-server-appfabric-architecture-guide.aspx
- Windows Azure AppFabric Service Bus - A Deep Dive (Video): http://www.msteched.com/2010/Europe/ASI410
- Access Control Service (ACS) High Level Architecture: http://blogs.msdn.com/b/alikl/archive/2010/09/28/azure-appfabric-access-control-service-acs-v-2-0-high-level-architecture-web-application-scenario.aspx

Applications and Programming
Programming patterns and architectures for Application Fabric systems.
- Various Examples from PDC 2010 on using the Azure Application Fabric as a Service Bus: http://tinyurl.com/2dcnt8o
- Creating a Distributed Cache using the Application Fabric: http://blog.structuretoobig.com/post/2010/08/31/Creating-a-Poor-Mane28099s-Distributed-Cache-in-Azure.aspx
- Azure Application Fabric Java SDK: http://jdotnetservices.com/
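To give a concrete flavor of the caching purpose listed above, a minimal hedged sketch against the AppFabric caching client API (Microsoft.ApplicationServer.Caching); endpoint and authentication settings are assumed to come from the client configuration the SDK sets up:

    // Hedged sketch: basic put/get against an AppFabric cache.
    using Microsoft.ApplicationServer.Caching;

    public class CacheDemo
    {
        public static void Main()
        {
            // DataCacheFactory reads the cache client settings from app.config.
            var factory = new DataCacheFactory();
            DataCache cache = factory.GetDefaultCache();

            cache.Put("greeting", "Hello from the cache");   // add or overwrite
            var value = (string)cache.Get("greeting");       // fetch it back
        }
    }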

    Read the article
