Search Results

Search found 20313 results on 813 pages for 'batch size'.

Page 130 of 813

  • Some issue with bufferedReader

    - by thetna
    I have a Java function as follows: public HashMap<String, ArrayList<Double>> embedWords(BufferedReader buffR1) { ArrayList<String> arrayList = new ArrayList<String>(); arrayList = getWords(buffR1); System.out.println("Word size:"+ arrayList.size()); ArrayList<ArrayList<Double>> arrList = getWordFeature(buffR1); System.out.println("Size of arrList:embedWords:"+arrList.size()); } The problem is that getWords and getWordFeature do not both return a non-zero size. When I comment out the call to getWords, getWordFeature returns a non-zero size; but when it is left in, the output is as follows: Word size:15055 Size of arrList:embedWords: 0

    Read the article

  • SQL SELECT multiple INNER JOINs

    - by Noam Smadja
    My Access database gives this error: "The SELECT statement includes a reserved word or an argument name that is misspelled or missing, or the punctuation is incorrect." I have a Library table, where Autnm, Topic, Size, Cover and Lang are foreign keys; each record is a book with its properties, such as author. I am not quite sure I am even using the correct JOIN; I'm quite new to "complex" SQL :) SELECT Library.Bknm_Hebrew, Library.Bknm_English, Library.Bknm_Russian, Library.Note, Library.ISBN, Library.Pages, Library.PUSD, Author.ID AS [AuthorID], Author.Author_hebrew AS [AuthorHebrew], Author.Author_English AS [AuthorEnglish], Author.Author_Russian AS [AuthorRussian], Topic.ID AS [TopicID], Topic.Topic_Hebrew AS [TopicHebrew], Topic.Topic_English AS [TopicEnglish], Topic.Topic_Russian AS [TopicRussian], Size.Size AS [Size], Cover.ID AS [TopicID], Cover.Cvrtyp_Hebrew AS [CoverHebrew], Cover.Cvrtyp_English AS [TopicEnglish], Cover.Cvrtyp_Russian AS [CoverRussian], Lang.ID AS [LangID], Lang.Lang_Hebrew AS [LangHebrew], Lang.Lang_English AS [LangEnglish], FROM Library INNER JOIN Author ON Library.Autnm = Author.ID INNER JOIN Topic ON Library.Topic = Topic.ID INNER JOIN Size ON Library.Size = Size.ID INNER JOIN Cover ON Library.Cover = Cover.ID INNER JOIN Lang ON Library.Lang = Lang.ID Thx in advance

    Read the article

  • The maximum message size quota for incoming messages (65536) has been exceeded.

    - by DaleyKD
    My WCF Service has an OperationContract that accepts, as a parameter, an array of objects. This can potentially be quite large. After looking for fixes for Bad Request: 400, I found the real reason: the maximum message size. I know this question has been asked before in MANY places. I've tried what everyone says: "Increase the sizes in the client and server config files." I have. It still doesn't work. My Service's web.config: <system.serviceModel> <services> <service name="myService"> <endpoint name="myEndpoint" address="" binding="basicHttpBinding" bindingConfiguration="myBinding" contract="Meisel.WCF.PDFDocs.IPDFDocsService" /> </service> </services> <bindings> <basicHttpBinding> <binding name="myBinding" closeTimeout="00:11:00" openTimeout="00:11:00" receiveTimeout="00:15:00" sendTimeout="00:15:00" maxBufferSize="2147483647" maxReceivedMessageSize="2147483647" maxBufferPoolSize="2147483647" transferMode="Buffered" allowCookies="false" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard" messageEncoding="Text" textEncoding="utf-8" useDefaultWebProxy="true"> <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647" maxArrayLength="2147483647" maxBytesPerRead="2147483647" maxNameTableCharCount="2147483647" /> <security mode="None" /> </binding> </basicHttpBinding> </bindings> <behaviors> <serviceBehaviors> <behavior> <serviceMetadata httpGetEnabled="true" /> <serviceDebug includeExceptionDetailInFaults="true" /> <dataContractSerializer maxItemsInObjectGraph="2147483647" /> </behavior> </serviceBehaviors> </behaviors> <serviceHostingEnvironment multipleSiteBindingsEnabled="true" /> </system.serviceModel> My Client's app.config: <system.serviceModel> <bindings> <basicHttpBinding> <binding name="BasicHttpBinding_IPDFDocsService" closeTimeout="00:11:00" openTimeout="00:11:00" receiveTimeout="00:10:00" sendTimeout="00:11:00" allowCookies="false" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard" maxBufferSize="2147483647" maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647" messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered" useDefaultWebProxy="true"> <readerQuotas maxDepth="32" maxStringContentLength="2147483647" maxArrayLength="2147483647" maxBytesPerRead="2147483647" maxNameTableCharCount="2147483647" /> <security mode="None"> <transport clientCredentialType="None" proxyCredentialType="None" realm="" /> <message clientCredentialType="UserName" algorithmSuite="Default" /> </security> </binding> </basicHttpBinding> </bindings> <client> <endpoint address="http://localhost:8451/PDFDocsService.svc" behaviorConfiguration="MoreItemsInObjectGraph" binding="basicHttpBinding" bindingConfiguration="BasicHttpBinding_IPDFDocsService" contract="PDFDocsService.IPDFDocsService" name="BasicHttpBinding_IPDFDocsService" /> </client> <behaviors> <endpointBehaviors> <behavior name="MoreItemsInObjectGraph"> <dataContractSerializer maxItemsInObjectGraph="2147483647" /> </behavior> </endpointBehaviors> </behaviors> </system.serviceModel> What can I possibly be missing or doing wrong? It's as though the service is ignoring what I typed in the maxReceivedBufferSize. Thanks in advance, Kyle UPDATE Here are two other StackOverflow questions where they never received an answer, either: http://stackoverflow.com/questions/2880623/maxreceivedmessagesize-adjusted-but-still-getting-the-quotaexceedexception-with http://stackoverflow.com/questions/2569715/wcf-maxreceivedmessagesize-property-not-taking

    Read the article

  • How to access GNU Xnee

    - by Gaurav Butola
    I have installed GNU Xnee (Gnee an OS X automator alternative) from the Software Centre but now I cant find it anywhere in the menus. Here is the output when I run gnee in the terminal gaurav@gaurav-HCL-ME-Laptop:~$ gnee (gnee:6864): Gtk-WARNING **: GtkSpinButton: setting an adjustment with non-zero page size is deprecated (gnee:6864): Gtk-WARNING **: GtkSpinButton: setting an adjustment with non-zero page size is deprecated (gnee:6864): Gtk-WARNING **: GtkSpinButton: setting an adjustment with non-zero page size is deprecated (gnee:6864): Gtk-WARNING **: GtkSpinButton: setting an adjustment with non-zero page size is deprecated (gnee:6864): Gtk-WARNING **: GtkSpinButton: setting an adjustment with non-zero page size is deprecated (gnee:6864): Gtk-WARNING **: GtkSpinButton: setting an adjustment with non-zero page size is deprecated (gnee:6864): Gtk-WARNING **: GtkSpinButton: setting an adjustment with non-zero page size is deprecated (gnee:6864): Gtk-WARNING **: GtkSpinButton: setting an adjustment with non-zero page size is deprecated (gnee:6864): Gtk-WARNING **: GtkSpinButton: setting an adjustment with non-zero page size is deprecated (gnee:6864): Gtk-WARNING **: GtkSpinButton: setting an adjustment with non-zero page size is deprecated *** glibc detected *** gnee: free(): invalid next size (fast): 0x08afb638 *** ======= Backtrace: ========= /lib/libc.so.6(+0x6c501)[0x53de501] /lib/libc.so.6(+0x6dd70)[0x53dfd70] /lib/libc.so.6(cfree+0x6d)[0x53e2e5d] gnee[0x804c9f5] /lib/libc.so.6(__libc_start_main+0xe7)[0x5388ce7] gnee[0x804c571] ======= Memory map: ======== 00110000-00112000 r-xp 00000000 08:01 2755679 /usr/lib/libgmodule-2.0.so.0.2600.0 00112000-00113000 r--p 00002000 08:01 2755679 /usr/lib/libgmodule-2.0.so.0.2600.0 00113000-00114000 rw-p 00003000 08:01 2755679 /usr/lib/libgmodule-2.0.so.0.2600.0 00116000-0011a000 r-xp 00000000 08:01 2755370 /usr/lib/libXtst.so.6.1.0 0011a000-0011b000 r--p 00003000 08:01 2755370 /usr/lib/libXtst.so.6.1.0 0011b000-0011c000 rw-p 00004000 08:01 2755370 /usr/lib/libXtst.so.6.1.0 0011c000-00176000 r-xp 00000000 08:01 2755432 /usr/lib/libbonoboui-2.so.0.0.0 00176000-00177000 r--p 00059000 08:01 2755432 /usr/lib/libbonoboui-2.so.0.0.0 00177000-00179000 rw-p 0005a000 08:01 2755432 /usr/lib/libbonoboui-2.so.0.0.0 00179000-001c8000 r-xp 00000000 08:01 2755428 /usr/lib/libbonobo-2.so.0.0.0 001c8000-001c9000 ---p 0004f000 08:01 2755428 /usr/lib/libbonobo-2.so.0.0.0 001c9000-001cc000 r--p 0004f000 08:01 2755428 /usr/lib/libbonobo-2.so.0.0.0 001cc000-001d3000 rw-p 00052000 08:01 2755428 /usr/lib/libbonobo-2.so.0.0.0 001d3000-00200000 r-xp 00000000 08:01 2754521 /usr/lib/libgconf-2.so.4.1.5 00200000-00201000 ---p 0002d000 08:01 2754521 /usr/lib/libgconf-2.so.4.1.5 00201000-00202000 r--p 0002d000 08:01 2754521 /usr/lib/libgconf-2.so.4.1.5 00202000-00204000 rw-p 0002e000 08:01 2754521 /usr/lib/libgconf-2.so.4.1.5 00204000-0021c000 r-xp 00000000 08:01 2755405 /usr/lib/libatk-1.0.so.0.3209.1 0021c000-0021d000 ---p 00018000 08:01 2755405 /usr/lib/libatk-1.0.so.0.3209.1 0021d000-0021e000 r--p 00018000 08:01 2755405 /usr/lib/libatk-1.0.so.0.3209.1 0021e000-0021f000 rw-p 00019000 08:01 2755405 /usr/lib/libatk-1.0.so.0.3209.1 0021f000-00243000 r-xp 00000000 08:01 2756035 /usr/lib/libpangoft2-1.0.so.0.2800.1 00243000-00244000 r--p 00023000 08:01 2756035 /usr/lib/libpangoft2-1.0.so.0.2800.1 00244000-00245000 rw-p 00024000 08:01 2756035 /usr/lib/libpangoft2-1.0.so.0.2800.1 00245000-00248000 r-xp 00000000 08:01 393403 /lib/libuuid.so.1.3.0 00248000-00249000 r--p 00002000 
08:01 393403 /lib/libuuid.so.1.3.0 00249000-0024a000 rw-p 00003000 08:01 393403 /lib/libuuid.so.1.3.0 0024a000-0024c000 r-xp 00000000 08:01 2755415 /usr/lib/libavahi-glib.so.1.0.2 0024c000-0024d000 r--p 00001000 08:01 2755415 /usr/lib/libavahi-glib.so.1.0.2 0024d000-0024e000 rw-p 00002000 08:01 2755415 /usr/lib/libavahi-glib.so.1.0.2 0024e000-00250000 r-xp 00000000 08:01 393661 /lib/libutil-2.12.1.so 00250000-00251000 r--p 00001000 08:01 393661 /lib/libutil-2.12.1.so 00251000-00252000 rw-p 00002000 08:01 393661 /lib/libutil-2.12.1.so 00254000-00255000 r-xp 00000000 00:00 0 [vdso] 00255000-0026c000 r-xp 00000000 08:01 2755647 /usr/lib/libgdk_pixbuf-2.0.so.0.2200.0 0026c000-0026d000 r--p 00017000 08:01 2755647 /usr/lib/libgdk_pixbuf-2.0.so.0.2200.0 0026d000-0026e000 rw-p 00018000 08:01 2755647 /usr/lib/libgdk_pixbuf-2.0.so.0.2200.0 0026e000-002ad000 r-xp 00000000 08:01 2756031 /usr/lib/libpango-1.0.so.0.2800.1 002ad000-002ae000 ---p 0003f000 08:01 2756031 /usr/lib/libpango-1.0.so.0.2800.1 002ae000-002af000 r--p 0003f000 08:01 2756031 /usr/lib/libpango-1.0.so.0.2800.1 002af000-002b0000 rw-p 00040000 08:01 2756031 /usr/lib/libpango-1.0.so.0.2800.1 002b0000-002be000 r-xp 00000000 08:01 2755342 /usr/lib/libXext.so.6.4.0 002be000-002bf000 r--p 0000d000 08:01 2755342 /usr/lib/libXext.so.6.4.0 002bf000-002c0000 rw-p 0000e000 08:01 2755342 /usr/lib/libXext.so.6.4.0 002c0000-002c4000 r-xp 00000000 08:01 2755317 /usr/lib/libORBitCosNaming-2.so.0.1.0 002c4000-002c5000 r--p 00003000 08:01 2755317 /usr/lib/libORBitCosNaming-2.so.0.1.0 002c5000-002c6000 rw-p 00004000 08:01 2755317 /usr/lib/libORBitCosNaming-2.so.0.1.0 002c7000-002d9000 r-xp 00000000 08:01 2755430 /usr/lib/libbonobo-activation.so.4.0.0 002d9000-002da000 r--p 00012000 08:01 2755430 /usr/lib/libbonobo-activation.so.4.0.0 002da000-002db000 rw-p 00013000 08:01 2755430 /usr/lib/libbonobo-activation.so.4.0.0 002db000-002dc000 rw-p 00000000 00:00 0 002dc000-00370000 r-xp 00000000 08:01 2755645 /usr/lib/libgdk-x11-2.0.so.0.2200.0 00370000-00372000 r--p 00094000 08:01 2755645 /usr/lib/libgdk-x11-2.0.so.0.2200.0 00372000-00373000 rw-p 00096000 08:01 2755645 /usr/lib/libgdk-x11-2.0.so.0.2200.0 00373000-0038d000 r-xp 00000000 08:01 2755689 /usr/lib/libgnome-keyring.so.0.1.1 0038d000-0038e000 r--p 00019000 08:01 2755689 /usr/lib/libgnome-keyring.so.0.1.1 0038e000-0038f000 rw-p 0001a000 08:01 2755689 /usr/lib/libgnome-keyring.so.0.1.1 0038f000-00395000 r-xp 00000000 08:01 2755619 /usr/lib/libgailutil.so.18.0.1 00395000-00396000 r--p 00005000 08:01 2755619 /usr/lib/libgailutil.so.18.0.1 00396000-00397000 rw-p 00006000 08:01 2755619 /usr/lib/libgailutil.so.18.0.1 00397000-003ac000 r-xp 00000000 08:01 2755300 /usr/lib/libICE.so.6.3.0 003ac000-003ad000 r--p 00014000 08:01 2755300 /usr/lib/libICE.so.6.3.0 003ad000-003ae000 rw-p 00015000 08:01 2755300 /usr/lib/libICE.so.6.3.0 003ae000-003b0000 rw-p 00000000 00:00 0 003b0000-003f0000 r-xp 00000000 08:01 2755715 /usr/lib/libgobject-2.0.so.0.2600.0 003f0000-003f1000 r--p 00040000 08:01 2755715 /usr/lib/libgobject-2.0.so.0.2600.0 003f1000-003f2000 rw-p 00041000 08:01 2755715 /usr/lib/libgobject-2.0.so.0.2600.0 003f2000-0040f000 r-xp 00000000 08:01 2755524 /usr/lib/libdbus-glib-1.so.2.1.0 0040f000-00410000 r--p 0001c000 08:01 2755524 /usr/lib/libdbus-glib-1.so.2.1.0 00410000-00411000 rw-p 0001d000 08:01 2755524 /usr/lib/libdbus-glib-1.so.2.1.0 00411000-00413000 r-xp 00000000 08:01 2755352 /usr/lib/libXinerama.so.1.0.0 00413000-00414000 r--p 00001000 08:01 2755352 /usr/lib/libXinerama.so.1.0.0 00414000-00415000 
rw-p 00002000 08:01 2755352 /usr/lib/libXinerama.so.1.0.0 00416000-0045f000 r-xp 00000000 08:01 2755313 /usr/lib/libORBit-2.so.0.1.0 0045f000-00467000 r--p 00049000 08:01 2755313 /usr/lib/libORBit-2.so.0.1.0 00467000-00469000 rw-p 00051000 08:01 2755313 /usr/lib/libORBit-2.so.0.1.0 00469000-00551000 r-xp 00000000 08:01 2755661 /usr/lib/libgio-2.0.so.0.2600.0 00551000-00553000 r--p 000e7000 08:01 2755661 /usr/lib/libgio-2.0.so.0.2600.0 00553000-00554000 rw-p 000e9000 08:01 2755661 /usr/lib/libgio-2.0.so.0.2600.0 00554000-00555000 rw-p 00000000 00:00 0 00555000-00578000 r-xp 00000000 08:01 393365 /lib/libpng12.so.0.44.0 00578000-00579000 r--p 00022000 08:01 393365 /lib/libpng12.so.0.44.0 00579000-0057a000 rw-p 00023000 08:01 393365 /lib/libpng12.so.0.44.0 0057d000-0057f000 r-xp 00000000 08:01 393656 /lib/libdl-2.12.1.so 0057f000-00580000 r--p 00001000 08:01 393656 /lib/libdl-2.12.1.soAborted

    Read the article

  • How do I mount a "DiskSecure Multiboot" partition?

    - by ????
    For a hard drive that has 4 or 5 partitions, I was able to mount one of them using Ubuntu LiveCD: sudo mount /dev/sda1 /mnt but is there a way to mount to the other partitions? (if using sudo fdisk -l, it only shows /dev/sda) GParted's snapshot is: Right now, the fdisk info is as follows: ubuntu@ubuntu:~$ sudo fdisk -l /dev/sda Disk /dev/sda: 320.1 GB, 320072933376 bytes 255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x1aca8ea5 Device Boot Start End Blocks Id System /dev/sda1 284993226 350602558 32804666+ 7 HPFS/NTFS/exFAT and then ubuntu@ubuntu:/mnt$ sudo fdisk -l /dev/sda1 Disk /dev/sda1: 33.6 GB, 33591978496 bytes 255 heads, 63 sectors/track, 4083 cylinders, total 65609333 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x2052474d This doesn't look like a partition table Probably you selected the wrong device. Device Boot Start End Blocks Id System /dev/sda1p1 ? 6579571 1924427647 958924038+ 70 DiskSecure Multi-Boot /dev/sda1p2 ? 1953251627 3771827541 909287957+ 43 Unknown /dev/sda1p3 ? 225735265 225735274 5 72 Unknown /dev/sda1p4 2642411520 2642463409 25945 0 Empty Partition table entries are not in disk order Per @lgarzo's request, parted info is: ubuntu@ubuntu:/mnt$ sudo parted /dev/sda print Model: ATA ST3320820AS (scsi) Disk /dev/sda: 320GB Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 146GB 180GB 33.6GB primary ntfs boot The command sudo mount /dev/sda1p2 /mnt won't work.
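
    A common way to reach partitions that are nested inside /dev/sda1 is to map them with kpartx, or to mount at an explicit byte offset with losetup. This is only a sketch: it assumes the embedded partition table is actually valid (something the fdisk warning above puts in doubt), kpartx comes from the kpartx/multipath-tools package, and the offset shown is simply the Start sector reported for /dev/sda1p1 multiplied by the 512-byte sector size.

      # map any partitions found inside /dev/sda1
      sudo kpartx -av /dev/sda1
      ls /dev/mapper/                              # look for sda1p1, sda1p2, ...
      sudo mount -o ro /dev/mapper/sda1p1 /mnt

      # alternative: mount one nested partition by byte offset (Start sector * 512)
      sudo losetup -o $((6579571 * 512)) /dev/loop1 /dev/sda1
      sudo mount -o ro /dev/loop1 /mnt

      # undo the mappings afterwards
      sudo umount /mnt
      sudo kpartx -d /dev/sda1
      sudo losetup -d /dev/loop1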

    Read the article

  • Oracle OpenWorld Preview: Oracle WebCenter Sessions You Won’t Want to Miss

    - by Christie Flanagan
    The beginning of Oracle OpenWorld is only a few short days away. This week on the WebCenter blog, we’ll focus in on the sessions you definitely don’t want to miss while you’re in San Francisco next week. Monday, October 1 will be a day focused on strategy. Here are the sessions you want to add to your calendar: CON8268 - Oracle WebCenter Strategy: Engaging Your Customers.
Empowering Your Business Monday, Oct 1, 10:45 AM - 11:45 AM - Moscone West – 3001 Start things off with Oracle WebCenter’s Christian Finn, Senior Director of Evangelism and Roel Stalman, VP of Product Management to learn more about the Oracle WebCenter strategy, and to understand where Oracle is taking the platform to help companies engage, customers, empower employees, and enable partners. This session will also feature Richard Backx, Business IT Architect/Consultant, for the Dutch telecom, KPN. Richard has played a key role in the roll-out of WebCenter products for KPN’s multibrand portals with a specific focus on creating the best customer journey platform for all the company’s digital channels. Business success starts with ensuring that everyone is engaged with the right people and the right information and can access what they need through the channel of their choice—web, mobile, or social. Are you giving customers, employees, and partners the best-possible experience? Come learn how you can! Dig deeper into WebCenter’s strategy for its ECM, portal, web experience management and social collaboration in the following sessions: CON8270 - Oracle WebCenter Content Strategy and Vision Monday, Oct 1, 12:15 PM - 1:15 PM - Moscone West – 3001 Oracle WebCenter Content provides a strategic content infrastructure for managing documents, images, e-mails, and rich media files. With a single repository, organizations can address any content use case, such as accounts payable, HR onboarding, document management, compliance, records management, digital asset management, or Website management. In this session, learn about future plans for how Oracle WebCenter will address new use cases as well as new integrations with Oracle Fusion Middleware and Oracle Applications, leveraging your investments by making your users more productive and error-free. CON8269 - Oracle WebCenter Sites Strategy and Vision Monday, Oct 1, 1:45 PM - 2:45 PM - Moscone West - 3009 Oracle’s Web experience management solution, Oracle WebCenter Sites, enables organizations to use the online channel to drive customer acquisition and brand loyalty. It helps marketers and business users easily create and manage contextually relevant, social, interactive online experiences across multiple channels on a global scale. In this session, learn about future plans for how Oracle WebCenter Sites will provide you with the tools, capabilities, and integrations you need in order to continue to address your customers’ evolving requirements for engaging online experiences and keep moving your business forward. CON8271 - Oracle WebCenter Portal Strategy and Vision Monday, Oct 1, 3:15 PM - 4:15 PM - Moscone West - 3001 To innovate and keep a competitive edge, organizations need to leverage the power of agile and responsive Web applications. Oracle WebCenter Portal enables you to do just that, by delivering intuitive user experiences for enterprise applications to drive innovation with composite applications and mashups. Attend this session to learn firsthand from Oracle WebCenter Portal customers like the Los Angeles Department of Water and Power, extend the value of existing enterprise applications, business processes, and content; delivers a superior business user experience; and maximizes limited IT resources. 
CON8272 - Oracle Social Network Strategy and Vision Monday, Oct 1, 4:45 PM - 5:45 PM - Moscone West - 3001 One key way of increasing employee productivity is by bringing people, processes, and information together—providing new social capabilities to enable business users to quickly correspond and collaborate on business activities. Oracle WebCenter provides a user engagement platform with social and collaborative technologies to empower business users to focus on their key business processes, applications, and content in the context of their role and process. Attend this session to hear how the latest social capabilities in Oracle Social Network are enabling organizations to transform themselves into social businesses.Attention WebCenter Customers: Last Day to RSVP for WebCenter Customer Appreciation Reception Oracle WebCenter partners Fishbowl Solutions, Fujitsu, Keste, Mythics, Redstone Content Solutions, TEAM Informatics, and TekStream invite Oracle WebCenter customers to a private cocktail reception at one of San Francisco's finest hotels. Please join us and fellow Oracle WebCenter customers for hors d'oeuvres and cocktails at this exclusive reception. Don't miss this opportunity to meet and talk with executives from Oracle WebCenter product management and product marketing, and premier Oracle WebCenter partners. We look forward to seeing you! RSVP today.

    Read the article

  • Backup options in SharePoint 2007

    - by sreejukg
    It is very important to make sure the server farm backup is taking properly, making sure that in case of any disaster, the administrator has the latest backup that can be used to restore. This articles addresses some of the options available for backup/restore in SharePoint 2007 Backup There are two options that can be utilized to take backup of SharePoint sites. Using SharePoint Central Administration website Using SharePoint central administration website, you can do backup/restore from user interface. Using central administration website you can back up the following · Server farm · Web application · Content databases Follow these steps to take backup of the server farm using central administration 1. Open Central administration website 2. Navigate to Operations -> Backup and Restore -> Perform a backup 3. Here you will have options to choose the item to back up. Select Farm (the top most item in the list) 4. Once you select the items to backup, click on “Continue to backup options” 5. Select “Full” as type of backup. 6. In the backup file location, enter the path where you need to store the backup. The path should be according to the UNC, for e.g. for c drive you may use \\server\c$\mybackupFolder 7. Click ok 8. Now you will be redirected to Backup and Restore Status page. This page shows the progress for the backup operation. You can use the refresh button to update the status of backup(this page will automatically refresh in every 30 seconds). Once completed you can find the files in the specified folder. Using STSADM website SharePoint comes with a STSADM command line tool. STSADM provides lot of administrative operations that can be performed on SharePoint 2007 sites. You can find STSADM command from the following location C:\Program Files\Common Files\Microsoft shared\web server extensions\12\bin (You may change the drive letter according to your installation) STSADM provides a method for performing the Office SharePoint Server 2007 administration tasks at the command line or by using batch files or scripts. STSADM provides access to operations not available by using the Central Administration site The general syntax for STSADM is as follows STSADM -operation Operation Name –parameter1 value1 –parameter2 value2 ……….. Using STSADM you can back up the following · Server farm · Web application · Content databases To perform any STSADM, operation you need to be a member of administrators group. Follow these steps to take backup of SharePoint server farm using STSADM tool. Note: make sure you are logged in to the computer where central administration website is installed. 1. Open the Command prompt (You should run command prompt with administrator privileges) 2. Change the working directory to C:\Program Files\Common Files\Microsoft shared\web server extensions\12\bin 3. Enter the command, then press enter Stsadm –o backup -directory <UNC path> -backupmethod full 4. You will get success / failure message once the command finishes. How to schedule the backup There is no option to schedule a backup using central administration site. Also there is no operation provided by STSADM to automate the backup. The farm administrators need to take backup in regular intervals. To achieve this, you can write a batch file that includes STSADM command to take full backup of the server. This batch file can be scheduled using windows task scheduler to execute in certain intervals. Sample of the batch file 1. Open notepad(or any other text editor) 2. 
Enter the following commands @echo off echo =============================================================== echo Back up the farm to <C:\backup> echo =============================================================== cd %COMMONPROGRAMFILES%\Microsoft Shared\web server extensions\12\BIN @echo off stsadm.exe -o backup -directory "<\backup>" -backupmethod full echo completed 3. Save the file with .bat extension You can schedule this batch file as you require. Other Options Using STSADM tool, you will be able to take backup for individual site collection. The syntax for this is stsadm -o backup -url <URL name for site collection> -filename <file name> [-overwrite] The explanations for the parameters are as follows. -url The url of the site collection you need to backup -filename The name of the backup file. E.g. c:\backup.bak -overwrite optional. Indicates if the filename specified exists, whether to overwrite or not. If you are creating the batch file for scheduling the backup for a site collection, you may need to specify the backup filename automatically created. It is an option that you can generate the filename with date so that you can keep backup for each day. e.g. The following commands can be utilized create a site collection backup. @echo off echo =============================================================== echo Back up the farm to <C:\backup> echo =============================================================== echo =============================================================== echo getting todays date to a variable echo =============================================================== @For /F "tokens=1,2,3 delims=/ " %%A in (‘Date /t’) do @( Set Day=%%A Set Month=%%B Set Year=%%C Set todayDate=%%C%%B%%A ) cd %COMMONPROGRAMFILES%\Microsoft Shared\web server extensions\12\BIN @echo off stsadm -o backup -url <sitecollection url> -filename \\ServerName\ShareName\Backup_%todayDate%.bak -overwrite echo completed To read more about backup STSADM operation, read this http://technet.microsoft.com/en-us/library/cc263441.aspx

    Read the article

  • Help on TileMapRenderer

    - by Crypted
    In my project, I'm trying to render a map using TileMapRenderer. But it doesn't show anything when I render it. But when I use some other files from a tutorial they are rendered correctly. When debugging my TileAtlas instance shows the size as 0. I have used Texture Packer UI for packing the images. Comparing with the tutorial's files, I can see that the index starts from 1 in my file and 0 in the tutorial. But changing it to 0 wouldn't work also. map.png format: RGBA8888 filter: Nearest,Nearest repeat: none Map rotate: false xy: 0, 0 size: 32, 32 orig: 32, 32 offset: 0, 0 index: 1 Map rotate: false xy: 32, 0 size: 32, 32 orig: 32, 32 offset: 0, 0 index: 2 Map rotate: false xy: 64, 0 size: 32, 32 orig: 32, 32 offset: 0, 0 index: 3 Map rotate: false xy: 96, 0 size: 32, 32 orig: 32, 32 offset: 0, 0 index: 4 Map rotate: false xy: 128, 0 size: 32, 32 orig: 32, 32 offset: 0, 0 index: 5 Here is the begining of the tmx file. <?xml version="1.0" encoding="UTF-8"?> <map version="1.0" orientation="orthogonal" width="20" height="20" tilewidth="32" tileheight="32"> <tileset firstgid="1" name="a" tilewidth="32" tileheight="32"> <image source="map.png" width="256" height="32"/> </tileset> <layer name="Tile Layer 1" width="20" height="20"> <data> <tile gid="2"/> <tile gid="2"/> Apart from that the tutorial files and my files seems to be similar. Can anyone help me here.

    Read the article

  • Ogre 3d and bullet physics interaction

    - by Tim
    I have been playing around with Ogre3d and trying to integrate bullet physics. I have previously somewhat successfully got this functionality working with irrlicht and bullet and I am trying to base this on what I had done there, but modifying it to fit with Ogre. It is working but not correctly and I would like some help to understand what it is I am doing wrong. I have a state system and when I enter the "gamestate" I call some functions such as setting up a basic scene, creating the physics simulation. I am doing that as follows. void GameState::enter() { ... // Setup Physics btBroadphaseInterface *BroadPhase = new btAxisSweep3(btVector3(-1000,-1000,-1000), btVector3(1000,1000,1000)); btDefaultCollisionConfiguration *CollisionConfiguration = new btDefaultCollisionConfiguration(); btCollisionDispatcher *Dispatcher = new btCollisionDispatcher(CollisionConfiguration); btSequentialImpulseConstraintSolver *Solver = new btSequentialImpulseConstraintSolver(); World = new btDiscreteDynamicsWorld(Dispatcher, BroadPhase, Solver, CollisionConfiguration); ... createScene(); } In the createScene method I add a light and try to setup a "ground" plane to act as the ground for things to collide with.. as follows. I expect there is issues with this as I get objects colliding with the ground but half way through it and they glitch around like crazy on collision. void GameState::createScene() { m_pSceneMgr->createLight("Light")->setPosition(75,75,75); // Physics // As a test we want a floor plane for things to collide with Ogre::Entity *ent; Ogre::Plane p; p.normal = Ogre::Vector3(0,1,0); p.d = 0; Ogre::MeshManager::getSingleton().createPlane( "FloorPlane", Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME, p, 200000, 200000, 20, 20, true, 1, 9000,9000,Ogre::Vector3::UNIT_Z); ent = m_pSceneMgr->createEntity("floor", "FloorPlane"); ent->setMaterialName("Test/Floor"); Ogre::SceneNode *node = m_pSceneMgr->getRootSceneNode()->createChildSceneNode(); node->attachObject(ent); btTransform Transform; Transform.setIdentity(); Transform.setOrigin(btVector3(0,1,0)); // Give it to the motion state btDefaultMotionState *MotionState = new btDefaultMotionState(Transform); btCollisionShape *Shape = new btStaticPlaneShape(btVector3(0,1,0),0); // Add Mass btVector3 LocalInertia; Shape->calculateLocalInertia(0, LocalInertia); // CReate the rigid body object btRigidBody *RigidBody = new btRigidBody(0, MotionState, Shape, LocalInertia); // Store a pointer to the Ogre Node so we can update it later RigidBody->setUserPointer((void *) (node)); // Add it to the physics world World->addRigidBody(RigidBody); Objects.push_back(RigidBody); m_pNumEntities++; // End Physics } I then have a method to create a cube and give it rigid body physics properties. I know there will be errors here as I get the items colliding with the ground but not with each other properly. So I would appreciate some input on what I am doing wrong. 
void GameState::CreateBox(const btVector3 &TPosition, const btVector3 &TScale, btScalar TMass) { Ogre::Vector3 size = Ogre::Vector3::ZERO; Ogre::Vector3 pos = Ogre::Vector3::ZERO; Ogre::Vector3 scale = Ogre::Vector3::ZERO; pos.x = TPosition.getX(); pos.y = TPosition.getY(); pos.z = TPosition.getZ(); scale.x = TScale.getX(); scale.y = TScale.getY(); scale.z = TScale.getZ(); Ogre::Entity *entity = m_pSceneMgr->createEntity( "Box" + Ogre::StringConverter::toString(m_pNumEntities), "cube.mesh"); entity->setCastShadows(true); Ogre::AxisAlignedBox boundingB = entity->getBoundingBox(); size = boundingB.getSize(); //size /= 2.0f; // Only the half needed? //size *= 0.96f; // Bullet margin is a bit bigger so we need a smaller size entity->setMaterialName("Test/Cube"); Ogre::SceneNode *node = m_pSceneMgr->getRootSceneNode()->createChildSceneNode(); node->attachObject(entity); node->setPosition(pos); //node->scale(scale); // Physics btTransform Transform; Transform.setIdentity(); Transform.setOrigin(TPosition); // Give it to the motion state btDefaultMotionState *MotionState = new btDefaultMotionState(Transform); btVector3 HalfExtents(TScale.getX()*0.5f,TScale.getY()*0.5f,TScale.getZ()*0.5f); btCollisionShape *Shape = new btBoxShape(HalfExtents); // Add Mass btVector3 LocalInertia; Shape->calculateLocalInertia(TMass, LocalInertia); // CReate the rigid body object btRigidBody *RigidBody = new btRigidBody(TMass, MotionState, Shape, LocalInertia); // Store a pointer to the Ogre Node so we can update it later RigidBody->setUserPointer((void *) (node)); // Add it to the physics world World->addRigidBody(RigidBody); Objects.push_back(RigidBody); m_pNumEntities++; } Then in the GameState::update() method which which runs every frame to handle input and render etc I call an UpdatePhysics method to update the physics simulation. void GameState::UpdatePhysics(unsigned int TDeltaTime) { World->stepSimulation(TDeltaTime * 0.001f, 60); btRigidBody *TObject; for(std::vector<btRigidBody *>::iterator it = Objects.begin(); it != Objects.end(); ++it) { // Update renderer Ogre::SceneNode *node = static_cast<Ogre::SceneNode *>((*it)->getUserPointer()); TObject = *it; // Set position btVector3 Point = TObject->getCenterOfMassPosition(); node->setPosition(Ogre::Vector3((float)Point[0], (float)Point[1], (float)Point[2])); // set rotation btVector3 EulerRotation; QuaternionToEuler(TObject->getOrientation(), EulerRotation); node->setOrientation(1,(Ogre::Real)EulerRotation[0], (Ogre::Real)EulerRotation[1], (Ogre::Real)EulerRotation[2]); //node->rotate(Ogre::Vector3(EulerRotation[0], EulerRotation[1], EulerRotation[2])); } } void GameState::QuaternionToEuler(const btQuaternion &TQuat, btVector3 &TEuler) { btScalar W = TQuat.getW(); btScalar X = TQuat.getX(); btScalar Y = TQuat.getY(); btScalar Z = TQuat.getZ(); float WSquared = W * W; float XSquared = X * X; float YSquared = Y * Y; float ZSquared = Z * Z; TEuler.setX(atan2f(2.0f * (Y * Z + X * W), -XSquared - YSquared + ZSquared + WSquared)); TEuler.setY(asinf(-2.0f * (X * Z - Y * W))); TEuler.setZ(atan2f(2.0f * (X * Y + Z * W), XSquared - YSquared - ZSquared + WSquared)); TEuler *= RADTODEG; } I seem to have issues with the cubes not colliding with each other and colliding strangely with the ground. I have tried to capture the effect with the attached image. I would appreciate any help in understanding what I have done wrong. Thanks. EDIT : Solution The following code shows the changes I made to get accurate physics. 
void GameState::createScene() { m_pSceneMgr->createLight("Light")->setPosition(75,75,75); // Physics // As a test we want a floor plane for things to collide with Ogre::Entity *ent; Ogre::Plane p; p.normal = Ogre::Vector3(0,1,0); p.d = 0; Ogre::MeshManager::getSingleton().createPlane( "FloorPlane", Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME, p, 200000, 200000, 20, 20, true, 1, 9000,9000,Ogre::Vector3::UNIT_Z); ent = m_pSceneMgr->createEntity("floor", "FloorPlane"); ent->setMaterialName("Test/Floor"); Ogre::SceneNode *node = m_pSceneMgr->getRootSceneNode()->createChildSceneNode(); node->attachObject(ent); btTransform Transform; Transform.setIdentity(); // Fixed the transform vector here for y back to 0 to stop the objects sinking into the ground. Transform.setOrigin(btVector3(0,0,0)); // Give it to the motion state btDefaultMotionState *MotionState = new btDefaultMotionState(Transform); btCollisionShape *Shape = new btStaticPlaneShape(btVector3(0,1,0),0); // Add Mass btVector3 LocalInertia; Shape->calculateLocalInertia(0, LocalInertia); // CReate the rigid body object btRigidBody *RigidBody = new btRigidBody(0, MotionState, Shape, LocalInertia); // Store a pointer to the Ogre Node so we can update it later RigidBody->setUserPointer((void *) (node)); // Add it to the physics world World->addRigidBody(RigidBody); Objects.push_back(RigidBody); m_pNumEntities++; // End Physics } void GameState::CreateBox(const btVector3 &TPosition, const btVector3 &TScale, btScalar TMass) { Ogre::Vector3 size = Ogre::Vector3::ZERO; Ogre::Vector3 pos = Ogre::Vector3::ZERO; Ogre::Vector3 scale = Ogre::Vector3::ZERO; pos.x = TPosition.getX(); pos.y = TPosition.getY(); pos.z = TPosition.getZ(); scale.x = TScale.getX(); scale.y = TScale.getY(); scale.z = TScale.getZ(); Ogre::Entity *entity = m_pSceneMgr->createEntity( "Box" + Ogre::StringConverter::toString(m_pNumEntities), "cube.mesh"); entity->setCastShadows(true); Ogre::AxisAlignedBox boundingB = entity->getBoundingBox(); // The ogre bounding box is slightly bigger so I am reducing it for // use with the rigid body. size = boundingB.getSize()*0.95f; entity->setMaterialName("Test/Cube"); Ogre::SceneNode *node = m_pSceneMgr->getRootSceneNode()->createChildSceneNode(); node->attachObject(entity); node->setPosition(pos); node->showBoundingBox(true); //node->scale(scale); // Physics btTransform Transform; Transform.setIdentity(); Transform.setOrigin(TPosition); // Give it to the motion state btDefaultMotionState *MotionState = new btDefaultMotionState(Transform); // I got the size of the bounding box above but wasn't using it to set // the size for the rigid body. This now does. 
btVector3 HalfExtents(size.x*0.5f,size.y*0.5f,size.z*0.5f); btCollisionShape *Shape = new btBoxShape(HalfExtents); // Add Mass btVector3 LocalInertia; Shape->calculateLocalInertia(TMass, LocalInertia); // CReate the rigid body object btRigidBody *RigidBody = new btRigidBody(TMass, MotionState, Shape, LocalInertia); // Store a pointer to the Ogre Node so we can update it later RigidBody->setUserPointer((void *) (node)); // Add it to the physics world World->addRigidBody(RigidBody); Objects.push_back(RigidBody); m_pNumEntities++; } void GameState::UpdatePhysics(unsigned int TDeltaTime) { World->stepSimulation(TDeltaTime * 0.001f, 60); btRigidBody *TObject; for(std::vector<btRigidBody *>::iterator it = Objects.begin(); it != Objects.end(); ++it) { // Update renderer Ogre::SceneNode *node = static_cast<Ogre::SceneNode *>((*it)->getUserPointer()); TObject = *it; // Set position btVector3 Point = TObject->getCenterOfMassPosition(); node->setPosition(Ogre::Vector3((float)Point[0], (float)Point[1], (float)Point[2])); // Convert the bullet Quaternion to an Ogre quaternion btQuaternion btq = TObject->getOrientation(); Ogre::Quaternion quart = Ogre::Quaternion(btq.w(),btq.x(),btq.y(),btq.z()); // use the quaternion with setOrientation node->setOrientation(quart); } } The QuaternionToEuler function isn't needed so that was removed from code and header files. The objects now collide with the ground and each other appropriately.

    Read the article

  • error: cannot fork() for status: Resource temporarily unavailable (git)

    - by Elnaz Shahmehr
    Whenever I run a git command (add, remove, pull, push) against my GitHub repository, I get the error below in my terminal. Thanks in advance! selnaz:iOS-Tidinfo Lnaz$ git add . error: cannot fork() for status: Resource temporarily unavailable fatal: Could not run git status --porcelain fatal: git status --porcelain failed fatal: git status --porcelain failed fatal: git status --porcelain failed fatal: git status --porcelain failed fatal: git status --porcelain failed fatal: git status --porcelain failed Edit: selnaz:iOS-Tidinfo Lnaz$ ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited file size (blocks, -f) unlimited max locked memory (kbytes, -l) unlimited max memory size (kbytes, -m) unlimited open files (-n) 256 pipe size (512 bytes, -p) 1 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 709 virtual memory (kbytes, -v) unlimited Edit 2: selnaz:iOS-Tidinfo Lnaz$ ps xfu | wc -l ps: illegal option -- f usage: ps [-AaCcEefhjlMmrSTvwXx] [-O fmt | -o fmt] [-G gid[,gid...]] [-u] [-p pid[,pid...]] [-t tty[,tty...]] [-U user[,user...]] ps [-L] 0
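
    fork() failing with "Resource temporarily unavailable" normally means the per-user process limit has been hit, so git cannot spawn the helper it needs. A rough sketch of checking and raising it on OS X; the numbers are illustrative, and often the real fix is simply quitting whatever application is leaking processes:

      # how many processes does this user already have, against the 709 shown by ulimit -u?
      ps -e -o user | grep -c "$USER"

      # the kernel-wide ceilings behind that limit
      sysctl kern.maxproc kern.maxprocperuid

      # raise the ceilings until reboot (illustrative values), then the shell limit, then retry
      sudo sysctl -w kern.maxproc=1024
      sudo sysctl -w kern.maxprocperuid=1000
      ulimit -u 1000
      git add .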

    Read the article

  • multi-dimension array problem in RGSS (RPG Maker XP)

    - by AzDesign
    This is my first day code script in RMXP. I read tutorials, ruby references, etc and I found myself stuck on a weird problem, here is the scenario: I made a custom script to display layered images Create the class, create an instance variable to hold the array, create a simple method to add an element into it, done The draw method (skipped the rest of the code to this part): def draw image = [] index = 0 for i in 0..@components.size if image.size > 0 index = image.size end image[index] = Sprite.new image[index].bitmap = RPG::Cache.picture(@components[i][0] + '.png') image[index].x = @x + @components[i][1] image[index].y = @y + @components[i][2] image[index].z = @z + @components[i][3] @test =+ 1 end end Create an event that does these script > $layerz = Layerz.new $layerz.configuration[0] = ['root',0,0,1] > $layerz.configuration[1] = ['bark',0,10,2] > $layerz.configuration[2] = ['branch',0,30,3] > $layerz.configuration[3] = ['leaves',0,60,4] $layerz.draw Run, trigger the event and the result : ERROR! Undefined method`[]' for nil:NilClass pointing at this line on draw method : image[index].bitmap = RPG::Cache.picture(@components[i][0] + '.png') THEN, I changed the method like these just for testing: def draw image = [] index = 0 for i in 0..@components.size if image.size > 0 index = image.size end image[index] = Sprite.new image[index].bitmap = RPG::Cache.picture(@components[0][0] + '.png') image[index].x = @x + @components[0][1] image[index].y = @y + @components[0][2] image[index].z = @z + @components[0][3] @test =+ 1 end I changed the @components[i][0] to @components[0][0] and IT WORKS, but only the root as it not iterates to the next array index Im stuck here, see : > in single level array, @components[0] and @components[i] has no problem > in multi-dimension array, @components[0][0] has no problem BUT > in multi-dimension array, @components[i][0] produce the error as above > mentioned. any suggestion to fix the error ? Or did I wrote something wrong ?

    Read the article

  • Why is HTML/Javascript minification beneficial

    - by Channel72
    Why is HTML/Javascript minification beneficial when the HTTP protocol already supports gzip data compression? I realize that Javascript/HTML minification has the potential to significantly reduce the size of Javascript/HTML files by removing unnecessary whitespace, and perhaps renaming variables to a few letters each, but doesn't gzip's DEFLATE algorithm do especially well when there are many repeated characters (e.g. lots of whitespace)? I realize that some Javascript minification tools do more than just reduce size. Google's Closure Compiler, for example, also tries to improve code performance by inlining functions and doing other analyses. But the primary purpose of Javascript minification is usually to reduce file size. I also realize there are other reasons you might want to minify aside from performance, such as code obfuscation. But again, that reason is not usually emphasized as much as performance gain and file size reduction. For example, Closure Compiler is not advertised as an obfuscation tool, but as a code size reducer and download-speed enhancer. So, how much performance do you really gain from Javascript/HTML minification when you're already significantly reducing file size with gzip compression?
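
    The trade-off is easy to measure for any given file: compare the gzipped size of the original with the gzipped size of the minified output. A rough sketch, assuming a minifier such as uglifyjs is installed and app.js is the file in question (both names are illustrative):

      # bytes after gzip alone
      gzip -9 -c app.js | wc -c

      # bytes after minification followed by gzip
      uglifyjs app.js -c -m | gzip -9 | wc -c

    The two typically stack: gzip soaks up most of the whitespace redundancy, but shortened identifiers and stripped comments still reduce the compressed size somewhat, and the browser also has less text to parse once it decompresses the response.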

    Read the article

  • How to format FAT32 filesystem infected with windows virus and that is write protected

    - by explorex
    Hi, I have a pendrive with FAT32 filesystem. it is infected with virus dont know which but has autorun.inf and create exe file within folder. I tried to format it with various filesystems and even try to delete it with GParted but couldn't because it says it is write protected i can't even delete files. How to format it? user@explorerx:~$ sudo fdisk -l Disk /dev/sda: 160.0 GB, 160041885696 bytes 255 heads, 63 sectors/track, 19457 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xbd04bd04 Device Boot Start End Blocks Id System /dev/sda1 * 1 498 3998720 82 Linux swap / Solaris Partition 1 does not end on cylinder boundary. /dev/sda2 499 19457 152287585+ f W95 Ext'd (LBA) /dev/sda5 5100 10198 40957686 7 HPFS/NTFS /dev/sda6 10199 14787 36861111 7 HPFS/NTFS /dev/sda7 14788 19457 37511743+ 7 HPFS/NTFS /dev/sda8 499 5099 36956160 83 Linux Partition table entries are not in disk order Disk /dev/sdc: 160.0 GB, 160041885696 bytes 255 heads, 63 sectors/track, 19457 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xc13bc13b Device Boot Start End Blocks Id System /dev/sdc1 1 9729 78143488 7 HPFS/NTFS /dev/sdc2 9729 19457 78143488 7 HPFS/NTFS Disk /dev/sdb: 4194 MB, 4194304000 bytes 112 heads, 47 sectors/track, 1556 cylinders Units = cylinders of 5264 * 512 = 2695168 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Device Boot Start End Blocks Id System /dev/sdb1 2 1557 4091904 b W95 FAT32
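
    Provided the stick has no physical lock switch engaged and really is /dev/sdb as in the fdisk output above, one sketch of wiping it from the live session and rebuilding a clean FAT32 filesystem (destructive, so double-check the device name):

      # make sure nothing has it mounted
      sudo umount /dev/sdb1

      # show and, if set, clear a software read-only flag
      sudo hdparm -r /dev/sdb
      sudo hdparm -r0 /dev/sdb

      # wipe the partition table and the start of the old, infected filesystem
      sudo dd if=/dev/zero of=/dev/sdb bs=1M count=1

      # recreate one partition of type W95 FAT32 and format it
      sudo fdisk /dev/sdb          # n, p, 1, accept defaults, t, b, w
      sudo mkfs.vfat -F 32 -n USBDISK /dev/sdb1

    If every write still fails after this, the drive's controller has probably switched itself to permanent read-only mode, a common failure of worn-out flash, and no formatting tool will get past it.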

    Read the article

  • Bandwidth Limit User

    - by user45611
    Hello, i'm saxtor i would like to know how to limit users bandwidth for 10gb per day however i dont want to limit them by ipaddress because if they where to go to an internet cafe the users at the cafe will be restricted with that quota, i need to log them via sockets, example the user request to download a file from http://localhost with there username and password, when they download the file sql will update there bandwidth they used, i have a script here but its not working my buffer doesnt work that rate when a user uses multiple connections thanks for the help!. /** * @author saxtor if you can improve this code email me @saxtorinc.com * @copyright 2010 / /* * CREATE TABLE IF NOT EXISTS max_traffic ( id int(255) NOT NULL AUTO_INCREMENT, limit int(255) NOT NULL, PRIMARY KEY (id) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=0 ; */ //SQL Connection [this is hackable for testing] date_default_timezone_set("America/Guyana"); mysql_connect("localhost", "root", "") or die(mysql_error()); mysql_select_db("Quota") or die(mysql_error()); function quota($id) { $result = mysql_query("SELECT `limit` FROM max_traffic WHERE id='$id' ") or die(error_log(mysql_error()));; $row = mysql_fetch_array($result); return $row[0]; } function update_quota($id,$value) { $result = mysql_query("UPDATE `max_traffic` SET `limit`='$value' WHERE id='$id'") or die(mysql_error()); return $value; } if ( quota(1) != 0) $limit = quota(1); else $limit = 0; $multipart = false; //was a part of the file requested? (partial download) $range = $_SERVER["HTTP_RANGE"]; if ($range) { //pass client Range header to rapidshare // _insert($range); $cookie .= "\r\nRange: $range"; $multipart = true; header("X-UR-RANGE-Range: $range"); } $url = 'http://127.0.0.1/puppy.iso'; $filename = basename($url); //octet-stream + attachment = client always stores file header('Content-type: application/octet-stream'); header('Content-Disposition: attachment; filename="'.$filename.'"'); //always included so clients know this script supports resuming header("Accept-Ranges: bytes"); //awful hack to pass rapidshare the premium cookie $user_agent = ini_get("user_agent"); ini_set("user_agent", $user_agent . "\r\nCookie: enc=$cookie"); $httphandle = fopen($url, "r"); $headers = stream_get_meta_data($httphandle); $size = $headers["wrapper_data"][6]; $sizer = explode(' ',$size); $size = $sizer[1]; //let's check the return header of rapidshare for range / length indicators //we'll just pass these to the client foreach ($headers["wrapper_data"] as $header) { $header = trim($header); if (substr(strtolower($header), 0, strlen("content-range")) == "content-range") { // _insert($range); header($header); header("X-RS-RANGE-" . $header); $multipart = true; //content-range indicates partial download } elseif (substr(strtolower($header), 0, strlen("Content-Length")) == "content-length") { // _insert($range); header($header); header("X-RS-CL-" . $header); } } if ($multipart) header('HTTP/1.1 206 Partial Content'); flush(); $speed = 4128; $packet = 1; //this is private dont touch. $bufsize = 128; //this is private dont touch/ $bandwidth = 0; //this is private dont touch. 
while (!(connection_aborted() || connection_status() == 1) && $size > 0) { while (!feof($httphandle) && $size > 0) { if ($limit <= 0 ) $size = 0; if ( $size < $bufsize && $size != 0 && $limit != 0) { echo fread($httphandle,$size); $bandwidth += $size; } else { if( $limit != 0) echo fread($httphandle,$bufsize); $bandwidth += $bufsize; } $size -= $bufsize; $limit -= $bufsize; flush(); if ($speed > 0 && ($bandwidth > $speed*$packet*103)) { usleep(100000); $packet++; //update_quota(1,$limit); } error_log(update_quota(1,$limit)); $limit = quota(1); //if( $size <= 0 ) // exit; } fclose($httphandle); } exit; ?

    Read the article

  • Getting "-bash: fork: Resource temporarily unavailable" in OSX

    - by Joseph Tura
    I seem to run into problems with the max. number of processes every so often. Anyone know what is best practice for fixing this? Running OSX 10.6 on a MacBook Pro i7. ulimit -a returns these values: core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited file size (blocks, -f) unlimited max locked memory (kbytes, -l) unlimited max memory size (kbytes, -m) unlimited open files (-n) 256 pipe size (512 bytes, -p) 1 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 266 virtual memory (kbytes, -v) unlimited When the error occurred I checked, and there were 102 running tasks and 523 threads.
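
    On 10.6 the default cap of 266 user processes is low enough that a browser, an IDE and a few build tools can exhaust it between them. A sketch of finding what is eating the slots and raising the cap; the path and numbers are illustrative for 10.6/10.7, and later OS X releases moved this configuration elsewhere:

      # which programs hold the most processes for this user?
      ps -e -o user=,comm= | awk -v u="$USER" '$1 == u {print $2}' | sort | uniq -c | sort -rn | head

      # raise the soft/hard limits until reboot
      sudo launchctl limit maxproc 512 1024

      # make the change survive reboots (10.6/10.7 read /etc/launchd.conf at boot)
      echo "limit maxproc 512 1024" | sudo tee -a /etc/launchd.conf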

    Read the article

  • Oracle announces Brand New Tuxedo 11g Release

    - by ruma.sanyal
    Today Oracle introduced two brand new products within the Tuxedo product line of its application grid portfolio. Oracle Tuxedo Application Runtime for CICS and Batch and Oracle Application Rehosting Workbench provide the ability to automate rehosting of mainframe Online and Batch applications to open systems running under Oracle Tuxedo. Oracle Application Rehosting Workbench automates adaptation of COBOL programs, JCL conversion for batch applications, and migration of VSAM files and DB2 data schema. Migration cost, risk, and project length and complexity are dramatically reduced, with over 90% of application assets re-hosted on open systems 'as-is'. Impact on the organization is minimized: users are protected from change by support for 3270 green screens, and developers continue to use familiar CICS APIs, batch functions, and common utilities. Other major features of this release are as follows: - Hotpluggability through introduction of the Oracle Tuxedo JCA Adapter - Metadata-driven application development using the SCA programming model - Support for the Python and Ruby languages to develop business services - Improved scalability and availability, plus TSAM enhancements. Register for a live webinar with Oracle Fusion Middleware Senior VP Hasan Rizvi. Read the press release. Find more details on these exciting new products.

    Read the article

  • Solaris cc segfaults when compiling rsync on intel platform

    - by PP
    I am trying to compile rsync-3.0.7 on Solaris 5.10 on an Intel chipset. When running ./configure I see the following (obviously erroneous lines): checking size of int... 0 checking size of long... 0 checking size of long long... 0 checking size of short... 0 checking size of int16_t... 0 checking size of uint16_t... 0 In config.log I see the following lines: configure.sh:5448: /tool/sunstudio12.1/bin/cc -xc99=all -o conftest -g -DHAVE_CONFIG_H conftest.c >&5 "conftest.c", line 123: warning: statement not reached cc: Fatal error in cc : Segmentation Fault configure.sh:5448: $? = 1 configure.sh: program exited with status 1 Segmentation fault? What could be causing a simple test script to segfault during compilation?
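
    When the bundled compiler segfaults on configure's own small test programs, the path of least resistance is usually to build with a different compiler rather than to debug Sun Studio. A sketch, assuming gcc is present (on Solaris 10 it often lives under /usr/sfw/bin or comes from OpenCSW):

      # throw away results cached while the compiler was crashing
      rm -rf config.cache autom4te.cache

      # configure and build rsync with gcc instead of Sun Studio cc
      CC=/usr/sfw/bin/gcc ./configure
      gmake && gmake install

    If the build has to stay on Sun Studio, checking for compiler patches for 12.1 is the other avenue; a cc that crashes on a trivial conftest.c will not get far on the real sources.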

    Read the article

  • OS X Lion - Installing Oracle 10g Standard Edition

    - by Cellze
    Im trying to install oracle 10g on to OS X Lion, I have previous achieved this on snow leopard with the following http://blog.rayapps.com/2009/09/14/how-to-install-oracle-database-10g-on-mac-os-x-snow-leopard/ The issue im having is that the ulimit settings in the oracle/.bash_profile cannot be modified. I have the following in the bash_profile: export DISPLAY=:0.0 export ORACLE_BASE=$HOME umask 022 # must match `sysctl kern.maxprocperuid` ulimit -Hu 512 ulimit -Su 512 # must match `sysctl kern.maxfilesperproc` ulimit -Hn 10240 ulimit -Sn 10240 Upon applying the bash_profile settings . ~/.bash_profile i get the following error: -bash: ulimit: max user processes: cannot be modify limit: Invalid argument This then results in $ sqlplus / as sysdba not functioning correctly with a Segmentation fault: 11 The output of $ ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited file size (blocks, -f) unlimited max locked memory (kbytes, -l) unlimited max memory size (kbytes, -m) unlimited open files (-n) 10240 pipe size (512 bytes, -p) 1 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 512 virtual memory (kbytes, -v) unlimited If any one knows how I can apply these ulimit settings to the oracle user I have created to allow me to install sqlplus and therefore create a db, that would be great.
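
    The "cannot modify limit" message is typically a sign that the values in .bash_profile conflict with the kernel's own ceilings or with the existing hard limits. One sketch of making the two sides agree, as the comments in the profile itself require; exact behaviour differs between OS X releases, so treat the values as illustrative:

      # what the kernel currently allows
      sysctl kern.maxprocperuid kern.maxfilesperproc

      # either set the kernel values to what the profile expects ...
      sudo sysctl -w kern.maxprocperuid=512
      sudo sysctl -w kern.maxfilesperproc=10240

      # ... or derive the profile's soft limits from whatever the kernel reports
      ulimit -Su $(sysctl -n kern.maxprocperuid)
      ulimit -Sn $(sysctl -n kern.maxfilesperproc)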

    Read the article

  • Raid Shows Up as Multiple Drives - Can't Mount

    - by manyxcxi
    I have a single hard drive that the OS is installed on and I have Sil raid card installed with two matching 500GB hdds set up in Raid 0 and formatted- they're completely empty. For whatever reason they are showing up as /dev/sdb and /dev/sdc and not as a single hard drive. I used fdisk to format both raid drives as Linux raid auto (fd) but I cannot mount either device and dmraid doesn't seem to want to work, what step am I missing? When I installed 9.04 oh so long ago it seems like it recognized and automatically did everything that needed to be done, now I'm stuck. dmraid Output root@tripoli:~# dmraid -r /dev/sdc: sil, "sil_biaebhadcfcb", stripe, ok, 976771072 sectors, data@ 0 /dev/sdb: sil, "sil_biaebhadcfcb", stripe, ok, 976771072 sectors, data@ 0 root@tripoli:~# dmraid -ay RAID set "sil_biaebhadcfcb" already active fdisk Output root@tripoli:~# fdisk -l Disk /dev/sda: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000b9b01 Device Boot Start End Blocks Id System /dev/sda1 * 1 32 248832 83 Linux Partition 1 does not end on cylinder boundary. /dev/sda2 32 60802 488134657 5 Extended /dev/sda5 32 60802 488134656 8e Linux LVM Disk /dev/sdb: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x6ead5c9a Device Boot Start End Blocks Id System /dev/sdb1 1 60801 488384001 fd Linux raid autodetect Disk /dev/sdc: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xe6e2af28 Device Boot Start End Blocks Id System /dev/sdc1 1 60801 488384001 fd Linux raid autodetect
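
    With a BIOS/fakeraid set handled by dmraid, the member disks sdb and sdc should not be partitioned or mounted directly; the type-fd "Linux raid autodetect" partitions on them are an mdadm convention and are not needed here. The array is used through its device-mapper node, which is what gets partitioned, formatted and mounted. A sketch, using the set name reported by dmraid -r above (the partition node name and filesystem choice are illustrative):

      # the assembled array shows up under /dev/mapper
      sudo dmraid -ay
      ls /dev/mapper/                                 # e.g. sil_biaebhadcfcb

      # partition the array device, not sdb/sdc
      sudo fdisk /dev/mapper/sil_biaebhadcfcb
      sudo kpartx -av /dev/mapper/sil_biaebhadcfcb    # create the partition node if it does not appear

      # format and mount the new partition
      sudo mkfs.ext4 /dev/mapper/sil_biaebhadcfcb1
      sudo mkdir -p /mnt/raid
      sudo mount /dev/mapper/sil_biaebhadcfcb1 /mnt/raid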

    Read the article

  • Kernel panic on reboot after failed logical volume resize

    - by Derek
    I attempted to do a logical volume resize yesterday using the following commands:

        $ sudo pvdisplay
          "/dev/sda8" is a new physical volume of "113.11 GiB"
          --- NEW Physical volume ---
          PV Name               /dev/sda8
          VG Name
          PV Size               113.11 GiB
          Allocatable           NO
          PE Size               0
          Total PE              0
          Free PE               0
          Allocated PE          0
          PV UUID               jwyO1o-b2ap-CW51-kx7O-kf26-arim-SM8V6m

        $ sudo vgextend vg /dev/sda8
        $ sudo vgdisplay vg
          --- Volume group ---
          VG Name               vg
          System ID
          Format                lvm2
          Metadata Areas        2
          Metadata Sequence No  9
          VG Access             read/write
          VG Status             resizable
          MAX LV                0
          Cur LV                5
          Open LV               5
          Max PV                0
          Cur PV                2
          Act PV                2
          VG Size               131.74 GiB
          PE Size               4.00 MiB
          Total PE              33725
          Alloc PE / Size       4769 / 18.63 GiB
          Free  PE / Size       28956 / 113.11 GiB
          VG UUID               AhusW2-pzFv-3W32-mpv2-s5VG-FN7S-kVSadx

        $ sudo lvresize -L +20GB /dev/mapper/vg-var

    So as you can see, adding the physical volume to the volume group worked, because the free space shows up there. When I ran the lvresize command, however, it never returned. I let it run overnight in the background, but this morning I still couldn't successfully do a pvdisplay or lvdisplay; I think they were waiting on a lock, so those commands never returned either. When I went to log onto the server's console, I saw a bunch of messages like:

        rcu_sched_state detected stall on cpu 2

    Now when I boot, I get a kernel panic and a message about not being able to mount /mapper/vg-root:

        cannot open root device "mapper/vg-root" or unknown_block(0,0)
        Kernel panic - not syncing: VFS: Unable to mount root file system on unknown_block(0,0)

    What should I do to get my system back up and running? Did I attempt the logical volume resize correctly? Thanks.
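    A common recovery path here is to boot from a live CD/USB that has the LVM tools, activate the volume group, and check the filesystems before touching the resize again. A rough sketch, assuming the volume group is still called vg and the logical volumes are root and var as above; the real names will show up in lvs:

        sudo apt-get install lvm2      # only if the live environment lacks it
        sudo vgscan
        sudo vgchange -ay vg           # activate all logical volumes in the group
        sudo lvs                       # confirm vg/root and vg/var exist and note their sizes
        sudo e2fsck -f /dev/vg/root    # check the root filesystem (assuming ext3/ext4)
        sudo e2fsck -f /dev/vg/var
        # If lvresize did grow vg/var, grow the filesystem to match before rebooting:
        # sudo resize2fs /dev/vg/var

    If the LVM metadata itself looks damaged, vgcfgrestore can roll it back to one of the automatic backups kept under /etc/lvm/archive on the root filesystem.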

    Read the article

  • Texture2D.GetData fails to return pixel colour data

    - by Chris Charabaruk
    Because I'm using sprite sheets instead of an individual texture per sprite, I need to pass in a Rectangle when calling Texture2D.GetData() in my collision detection for per-pixel tests. Unfortunately, without fail I get an ArgumentException percolated down from an internal method inside the Texture (not Texture2D) class. My code for getting the texture data looks like this:

        public override Color[] GetPixelData()
        {
            Color[] data = new Color[(int)size.Product()];
            Rectangle rect = new Rectangle(hframe * (int)size.X, vframe * (int)size.Y,
                                           (int)size.X, (int)size.Y);
        #if DEBUG
            if (sprite.Bounds.Contains(rect) && sprite.Format == SurfaceFormat.Color)
        #endif
                sprite.GetData(0, rect, data, 0, 1);
            return data;
        }

    Even with the check to ensure I'm grabbing a valid rectangle and that the texture format matches what I'm trying to get, I still get that exception, claiming "The size of the data passed in is too large or too small for this resource." Unfortunately, the debugger won't let me check the locals within the Texture.ValidateTotalSize() method where the exception originates. Has anyone else had this problem and knows how to fix it? I'm relying on AABB testing only for now, but that doesn't really work for some of my game's entities due to odd shapes, rotation and scaling.
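    One thing worth checking before anything else: the last argument to GetData is the element count, and passing 1 while data holds a whole frame's worth of pixels is exactly the size mismatch that ValidateTotalSize complains about. A minimal sketch of the call with the count matching the rectangle (size, hframe and vframe are the same fields used in the snippet above):

        public override Color[] GetPixelData()
        {
            int width = (int)size.X;
            int height = (int)size.Y;
            Color[] data = new Color[width * height];
            Rectangle rect = new Rectangle(hframe * width, vframe * height, width, height);

            // elementCount must equal rect.Width * rect.Height, i.e. data.Length here
            sprite.GetData(0, rect, data, 0, data.Length);
            return data;
        }

    The overload being used is GetData<T>(int level, Rectangle? rect, T[] data, int startIndex, int elementCount), so startIndex stays 0 and elementCount has to cover every pixel in rect.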

    Read the article

  • Software, script or a tool to automate managing which tests to run

    - by laggingreflex
    I have a batch file that lists all the test files I have and asks me which test I want to perform, like this:

        Test. [U]nit, [I]ntegration : i          (user input)
        Integration. [A]ll, [2][U]serInteraction, [3][R]esultGeneration : u 2
        User Interaction. Running "mocha integration\2userint.js" ...

    So essentially I have configured a batch "option" for each test file I have, which I can choose to run individually or all together. But adding and removing tests is a pain: I have to update the batch file every time a file is added or changed. Is there a piece of software, a script or a tool that does this automatically, or at least makes it easier? I basically need it to be aware of the test files and ask me which one(s) I want to run. A GUI with checkboxes would be ideal, but I'll take anything. I'm working in node.js.
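    One low-tech option, since the project is already on node.js, is to let a small script discover the test files instead of maintaining the batch file by hand. A minimal sketch; the directory names, the .js filter and the plain "mocha <files>" invocation are assumptions to adjust:

        // pick-test.js - lists discovered test files and runs the chosen one(s) with mocha
        var fs = require('fs');
        var path = require('path');
        var readline = require('readline');
        var exec = require('child_process').exec;

        var dirs = ['test/unit', 'test/integration'];   // adjust to the real layout
        var files = [];
        dirs.forEach(function (dir) {
          if (!fs.existsSync(dir)) return;
          fs.readdirSync(dir).forEach(function (f) {
            if (path.extname(f) === '.js') files.push(path.join(dir, f));
          });
        });

        files.forEach(function (f, i) {
          console.log('[' + (i + 1) + '] ' + f);
        });

        var rl = readline.createInterface({ input: process.stdin, output: process.stdout });
        rl.question('Run which test? (number, or "a" for all) ', function (answer) {
          rl.close();
          var targets = (answer === 'a') ? files : [files[parseInt(answer, 10) - 1]];
          var child = exec('mocha ' + targets.join(' '));
          child.stdout.pipe(process.stdout);
          child.stderr.pipe(process.stderr);
        });

    For the "run everything" case, mocha can also simply be pointed at a directory (for example mocha test/integration), which removes the need to enumerate files at all.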

    Read the article

  • Change collision action

    - by PatrickR
    My collision detection is working fine. The problem is that whenever my "bird" hits a "cloud", the cloud disappears and I get some points. The same thing happens for the "sol" (sun), which it should, but it should not happen with the clouds. How can this be changed? I've tried a lot, but I can't seem to figure it out.

    Collision code:

        - (void)update:(ccTime)dt
        {
            bird.position = ccpAdd(bird.position, skyVelocity);
            NSMutableArray *projectilesToDelete = [[NSMutableArray alloc] init];

            for (CCSprite *bird in _projectiles) {
                bird.anchorPoint = ccp(0, 0);
                CGRect absoluteBox = CGRectMake(bird.position.x, bird.position.y,
                                                [bird boundingBox].size.width, [bird boundingBox].size.height);

                NSMutableArray *targetsToDelete = [[NSMutableArray alloc] init];

                for (CCSprite *cloudSprite in _targets) {
                    cloudSprite.anchorPoint = ccp(0, 0);
                    CGRect absoluteBox = CGRectMake(cloudSprite.position.x, cloudSprite.position.y,
                                                    [cloudSprite boundingBox].size.width, [cloudSprite boundingBox].size.height);
                    if (CGRectIntersectsRect([bird boundingBox], [cloudSprite boundingBox])) {
                        [targetsToDelete addObject:cloudSprite];
                    }
                }

                for (CCSprite *solSprite in _targets) {
                    solSprite.anchorPoint = ccp(0, 0);
                    CGRect absoluteBox = CGRectMake(solSprite.position.x, solSprite.position.y,
                                                    [solSprite boundingBox].size.width, [solSprite boundingBox].size.height);
                    if (CGRectIntersectsRect([bird boundingBox], [solSprite boundingBox])) {
                        [targetsToDelete addObject:solSprite];
                        score += 50/2;
                        [scoreLabel setString:[NSString stringWithFormat:@"%d", score]];
                    }
                }

                // WHEN THE CLOUD IS HIT BY THE BIRD
                for (CCSprite *cloudSprite in targetsToDelete) {
                    //[_targets removeObject:cloudSprite];
                    //[self removeChild:cloudSprite cleanup:YES];
                }

                // WHEN THE SUN IS HIT BY THE BIRD
                for (CCSprite *solSprite in targetsToDelete) {
                    [_targets removeObject:solSprite];
                    [self removeChild:solSprite cleanup:YES];
                }

                if (targetsToDelete.count > 0) {
                    [projectilesToDelete addObject:bird];
                }
                [targetsToDelete release];
            }

            // WHEN THE BIRD IS HIT BY ANYTHING ELSE
            for (CCSprite *bird in projectilesToDelete) {
                //[_projectiles removeObject:bird];
                //[self removeChild:bird cleanup:YES];
            }
            [projectilesToDelete release];
        }
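    One likely reason the clouds behave like the sun is that both inner loops walk the same _targets array, so a cloud that collides is also caught by the solSprite loop, scored, and removed. A rough sketch of one way to separate them, assuming clouds and suns are kept in separate arrays _clouds and _suns (hypothetical names; tagging the sprites and checking the tag would work just as well). This goes inside the for (CCSprite *bird in _projectiles) loop and only reuses calls that already appear in the update: method above:

        NSMutableArray *sunsToDelete = [[NSMutableArray alloc] init];

        // Clouds: detect the hit, but do not score or remove the sprite
        for (CCSprite *cloudSprite in _clouds) {
            if (CGRectIntersectsRect([bird boundingBox], [cloudSprite boundingBox])) {
                // react however clouds should react, but leave the cloud alone
            }
        }

        // Sun: score and schedule for removal
        for (CCSprite *solSprite in _suns) {
            if (CGRectIntersectsRect([bird boundingBox], [solSprite boundingBox])) {
                [sunsToDelete addObject:solSprite];
                score += 50/2;
                [scoreLabel setString:[NSString stringWithFormat:@"%d", score]];
            }
        }

        for (CCSprite *solSprite in sunsToDelete) {
            [_suns removeObject:solSprite];
            [self removeChild:solSprite cleanup:YES];
        }
        [sunsToDelete release];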

    Read the article

  • Unable to mount an LVM Hard-drive after upgrade

    - by Bruce Staples
    I imagine this is a basic gotcha, but I can't see it. I have a system with two physical hard drives. The boot drive (/dev/sda) was running 10.04 and the second drive (/dev/sdb) was just a mounted filesystem. I did a clean install of Ubuntu 12.04 overwriting /dev/sda (not an upgrade) and now I cannot mount the second drive, so I do not know what to enter into fstab. I had expected to use:

        /dev/sdb /tera ext4 defaults 0 2

    But even manual mounting fails (I have also tried various -t options on the off chance):

        sudo mount -t ext4 /dev/sdb1 /tera
        mount: wrong fs type, bad option, bad superblock on /dev/sdb1,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so

    Output from the disk queries indicates that it is a Linux LVM volume and still a healthy disk.

        sudo lshw -C disk
          *-disk:0
               description: ATA Disk
               product: WDC WD5000AACS-0
               vendor: Western Digital
               physical id: 0
               bus info: scsi@2:0.0.0
               logical name: /dev/sda
               version: 01.0
               serial: WD-WCASU1401098
               size: 465GiB (500GB)
               capabilities: partitioned partitioned:dos
               configuration: ansiversion=5 signature=00015a55
          *-disk:1
               description: ATA Disk
               product: WDC WD10EADS-00L
               vendor: Western Digital
               physical id: 1
               bus info: scsi@3:0.0.0
               logical name: /dev/sdb
               version: 01.0
               serial: WD-WCAU47836304
               size: 931GiB (1TB)
               capabilities: partitioned partitioned:dos
               configuration: ansiversion=5

        sudo fdisk -l

        Disk /dev/sda: 500.1 GB, 500106780160 bytes
        255 heads, 63 sectors/track, 60801 cylinders, total 976771055 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00015a55

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048   972580863   486289408   83  Linux
        /dev/sda2       972582910   976769023     2093057    5  Extended
        /dev/sda5       972582912   976769023     2093056   82  Linux swap / Solaris

        Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1               1  1953525167   976762583+  8e  Linux LVM

    LVM doesn't appear to be an option for mount or fstab, and here's a SMART data screenshot from Disk Utility.
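    Because /dev/sdb1 is an LVM physical volume, the filesystem lives on a logical volume inside it rather than on /dev/sdb1 directly, and a desktop 12.04 install does not always ship with the LVM tools. A rough sketch of the usual steps; the volume group and logical volume names below are placeholders, and lvs will show the real ones:

        sudo apt-get install lvm2
        sudo vgscan                       # should report the volume group on /dev/sdb1
        sudo vgchange -ay                 # activate its logical volumes
        sudo lvs                          # note the VG and LV names, e.g. myvg / mylv
        sudo mount /dev/myvg/mylv /tera   # or /dev/mapper/myvg-mylv

        # fstab entry once it mounts (still using the placeholder names):
        # /dev/mapper/myvg-mylv  /tera  ext4  defaults  0  2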

    Read the article

  • Unable to mount USB drive - Error creating mount point: Permission denied

    - by steve
    Whenever I plug a USB drive into my computer, a window pops up and says:

        Unable to mount [Name of USB]
        Error creating mount point: Permission denied

        steve@goliath:/$ uname -a
        Linux goliath 3.2.0-32-generic #51-Ubuntu SMP Wed Sep 26 21:33:09 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

        steve@goliath:/$ sudo fdisk -l

        WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.

        Disk /dev/sda: 120.0 GB, 120034123776 bytes
        255 heads, 63 sectors/track, 14593 cylinders, total 234441648 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0f716ee1

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1               1   234441647   117220823+  ee  GPT

        WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.

        Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes
        255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0f710ee1

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1               1  2930277167  1465138583+  ee  GPT

        Disk /dev/sdc: 16.0 GB, 16005464064 bytes
        74 heads, 10 sectors/track, 42244 cylinders, total 31260672 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xc3072e18

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdc1            8064    31260671    15626304    c  W95 FAT32 (LBA)

        steve@goliath:/$ sudo mkdir /media/external
        mkdir: cannot create directory `/media/external': Permission denied
        steve@goliath:/$ sudo mkdir /media/usb0
        mkdir: cannot create directory `/media/usb0': Permission denied
        steve@goliath:/$ sudo ls -l / | grep media
        drwxr-xr-x 3 root root 4096 Oct 3 22:48 media
        steve@goliath:/$ ls /media/ -a
        .  ..  MediaShare

    MediaShare is the directory on my server that has all my movies and music. If there is any information I left out, please let me know.
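    Since even sudo mkdir fails under /media, this looks less like ordinary file permissions (the drwxr-xr-x root:root listing is normal) and more like a read-only root filesystem or an immutable attribute on /media. A few hedged checks, as a diagnostic sketch rather than a known fix:

        mount | grep ' / '             # look for "ro" in the options; disk errors can remount / read-only
        dmesg | tail                   # look for ext4/I/O errors that would explain a read-only remount
        lsattr -d /media               # an 'i' flag means the directory is immutable
        sudo chattr -i /media          # clear the immutable flag if it is set
        sudo mount -o remount,rw /     # remount read-write only after confirming the disk is healthy

    If the root filesystem did go read-only because of errors, running a filesystem check from a live CD before remounting read-write is the safer order.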

    Read the article

< Previous Page | 126 127 128 129 130 131 132 133 134 135 136 137  | Next Page >