eellife

Freewrite: a 21st-century typewriter for people who know typing at its finest

A tool for the era ahead, in which the PC, the display, the keyboard, and the internet
are coming apart from one another.
Until recently, the PC was a tool that could do anything, and too much of one.
People who use it for work certainly need it, but probably no one masters every feature a PC has. As a "tool," it is still hard to call it the best.
Even the keyboard may have too many buttons for the average person (laughs).
Excellence at typing on a physical keyboard and making full use of the internet are not the same thing. (The BlackBerry exists, but I will set it aside here.)
Smartphones, tablets, IoT devices and the rest: we have reached a welcome age in which these pieces can relate to one another freely. Internet use through a display alone, for example, is still feeling its way forward with virtual keyboards and voice input (though no small number of people get by almost entirely on flick typing).
But what about raising the value of the keyboard on its own?
The Portabook I introduced previously was King Jim's evolution of the Pomera, a device that carved out just the input side, but asked whether it amplified the appeal as a tool, the answer was somewhat disappointing.
At present, the keyboard is the best mechanism we have for text input, yet there seems to be little movement toward elevating its appeal. The best-known name in this space is probably Topre's REALFORCE, famous above all as a keyboard that is easy on the fingers and hard to break. Could that be developed further? What would the next keyboard be?
Which brings us to what I found this time: the Freewrite, something you might call a fusion of the Pomera and the REALFORCE.
Freewrite, a distraction-free typewriter for writers, goes on sale.
Japanese support and up to four weeks of battery life let you write anywhere.
A keyboard with the firm, satisfying keystrokes that recent laptops cannot deliver.
A display that shows only the information you need. Above all, the knobs on the left and right embody the design philosophy: the right one toggles whether to connect to the cloud (which reportedly reaches iCloud and Evernote as well), and the left one does nothing more than choose among three save destinations.
It weighs about 1.8 kg. That is certainly heavy, but if it were light you could not strike the keys firmly; I think it is the right choice.
It currently sells for $449. By no means cheap, but the Freewrite, which buys you stretches of focused typing time, may well deserve to be called the next keyboard.
Newspaper companies with staff who pass the time playing solitaire would do well to bring it in.
Japanese does indeed appear to be among the supported languages.
-------
What languages does the Freewrite support?
At the time of this writing, we have committed support for the following languages:
English (QWERTY, DVORAK, and COLEMAK), International English, German, Italian, Portuguese, Spanish, French, Danish, Swedish, Greek, Dutch, Turkish, Hebrew, Korean, Russian, Chinese, and Japanese.
-------